{"conf": "unix", "generated_at": "2026-04-26T08:00:02.954878Z", "threads": [{"num": 1, "subject": "intro", "response_count": 19, "posts": [{"response": 1, "author": "Viper", "date": "Fri, Apr 25, 1997 (22:26)", "body": "Mind my ignorance but, what is unix? I'm trying to find out what language I need to know to write programs for DOS or Windows."}, {"response": 2, "author": "spif", "date": "Sun, May 18, 1997 (23:50)", "body": "UNIX is a term used to describe a type of operating system, basically. As far as languages for writing programs for DOS or Windows, C and C++ are pretty widely used for that purpose. Coincidentally, C and C++ can be (and are) used to write programs for UNIX. You can probably find some decent books and/or courses on C and/or C++ in your area."}, {"response": 3, "author": "deejoe", "date": "Sat, May 24, 1997 (13:32)", "body": "Just thought I'd drop by to say 'hi'. This is (hopefully) my first successful shot at posting something to the Spring, though I've tried several times to post before, maybe this particular access route will work out OK. More later if I get some success."}, {"response": 4, "author": "deejoe", "date": "Sat, May 24, 1997 (13:41)", "body": "Ahh, very nice, it's working well (or I'm working well with it, whichever). Anyway, I got some of my earliest UNIX experience trying to move around the WELL during the year-year and a half I had an account there. As a grad student at the Univ. of Rochester, I pushed to have grads given the same unix email shell account access that undergrads had been given. Then, our lab got a nice new Indy for use in processing and analyzing crystal X-ray diffraction data and I had a machine mostly to myself to 'play' on and learn unix. I've played around a bit with Linux, limited mostly to getting it to boot on various machines that have too little memory and disk space to run a full system on them. And over the last year, I've been administering another IRIX box used for various data analysis and molecular modelling applications. And so, it's appropriate that I've come full circle over the last several years, logging in via a shell account to the unix conference on a YAPP system."}, {"response": 5, "author": "terry", "date": "Sun, May 25, 1997 (11:16)", "body": "If you know any other WELL alums, let them know that they are welcome here and I'll be happy to give out more shell accounts. Today I started a \"buddhist\" conference as a test of yapp's ability to tie in to email lists."}, {"response": 6, "author": "deejoe", "date": "Tue, Jun 10, 1997 (13:30)", "body": "Thanks, Terry. I appreciate the shell account, though I'll be a bit rusty for awhile learning and relearning things."}, {"response": 7, "author": "terry", "date": "Tue, Jun 10, 1997 (19:04)", "body": "We'll help you along. Let us know what you need. Glad you're joining us. What do you know about UNIX? What is your experience with online community and the web? Today we got a brand new version of Yapp, Dave Thaler's been at work."}, {"response": 8, "author": "seth", "date": "Mon, Jun 30, 1997 (11:16)", "body": "I am a Network Integrator and have been looking for UNIX software to print MICR encoded checks originating from a UNIX accounting package. Web searches have turned up a few VERY expensive packages but nothing in the Shareware world. Do you know of anyone who might be able to help me? Thanks"}, {"response": 9, "author": "Charlotte", "date": "Mon, Aug  3, 1998 (13:06)", "body": "I'm a Unix systems administrator. Not a guru, but maybe I can help with some of the more fundamental questions."}, {"response": 10, "author": "terry", "date": "Tue, Aug  4, 1998 (12:51)", "body": "Would you like to be on our Spring UNIX team?"}, {"response": 11, "author": "Charlotte", "date": "Tue, Aug  4, 1998 (14:04)", "body": "What would that entail? I usually enjoy being on any kind of team. :) And from what I've seen so far, youse guys are winners."}, {"response": 12, "author": "KitchenManager", "date": "Tue, Aug  4, 1998 (16:48)", "body": "I think most of that will probably entail fixing my booboos..."}, {"response": 13, "author": "terry", "date": "Tue, Aug  4, 1998 (17:33)", "body": "It would involve helping keep the system running. The main sysad is Jeff Kramer jeff@spring.net x"}, {"response": 14, "author": "ratthing", "date": "Tue, Aug  4, 1998 (17:54)", "body": "hey terry, if you need any Unix help, you can count me in as well."}, {"response": 15, "author": "Charlotte", "date": "Tue, Aug  4, 1998 (18:41)", "body": "Well, I am happy to do whatever I can do. Tell Jeff to put me on the team. (I wonder if we get Team jerseys?) :)"}, {"response": 16, "author": "KitchenManager", "date": "Tue, Aug  4, 1998 (23:59)", "body": "Can they say, \"The Spring's World Domination Tour\"?"}, {"response": 17, "author": "spif", "date": "Mon, Aug 17, 1998 (20:03)", "body": "I'd be happy to help in an advisory capacity. Not that I've ever asked permission to give you advice on sysadminly matters in the past, Terry ;) e-mail me."}, {"response": 18, "author": "Charlotte", "date": "Mon, Aug 17, 1998 (20:58)", "body": "Well, I must admit...working on this team is the easiest gig I've ever had. :)"}, {"response": 19, "author": "terry", "date": "Tue, Aug 18, 1998 (08:02)", "body": "Thanks Bryan, and it will get harder when we bring the new machine online and start putting up lots of new dedicated websites! unix conference Main Menu"}]}, {"num": 10, "subject": "Incredibly stupid unix questions", "response_count": 1, "posts": [{"response": 1, "author": "CotC", "date": "Mon, Aug  3, 1998 (11:30)", "body": "Don't know how helpful these might be, but: http://www.softlab.ntua.gr/cgi-bin/man-cgi unix conference Main Menu"}]}, {"num": 11, "subject": "UNIX utilities", "response_count": 2, "posts": [{"response": 1, "author": "ian", "date": "Mon, May 19, 1997 (22:12)", "body": "You can get a complete set of UNIX utilities for any PC (MS-DOS, Win 3.x, Win95, Win NT, OS/2) and for IBM mainframes (S/390 or P/390 running MVS/ESA , VM/ESA, or VSE/ESA). For the PCs, I recommend MKS Toolkit ( http://www.mks.com/solution/tk/ ). You can also get a less complete (but free) set of utilities if you use the GNU utilities ( ftp://prep.ai.mit/edu/ ). For mainframes, the utilites are included as part of the operating system software, in the \"OpenEdition\" ( http://www.s390.ibm.com/products/oe/ ), which IBM supplies with MVS, VM and VSE. The mainframe C compiler is an ANSI compiler, and I have ported code from UNIX to VM/ESA with only a very small proportion of changes, and plan to use OpenEdition for similar work in the future. My experience to date is that the commercial UNIX utitilies for non-UNIX computers provides a very portable working environment across a range of different platforms."}, {"response": 2, "author": "CotC", "date": "Tue, Jun  2, 1998 (12:32)", "body": "Check out the U/WIN package at www.kornshell.com There's also a port of the entire GNU development environment at www.cygnus.com , but I can't remember the exact URL. Just do a search for cygwin. unix conference Main Menu"}]}, {"num": 12, "subject": "sendmail", "response_count": 8, "posts": [{"response": 1, "author": "terry", "date": "Tue, Jun 10, 1997 (12:39)", "body": "How do you get sendmail to issue to notificationi alerts of incoming mail when you're at shell prompt? It does this on barton.spring.com but ont on our www.spring.com machine. Or is this a function of the user account?"}, {"response": 2, "author": "terry", "date": "Tue, Jun 10, 1997 (12:41)", "body": "Here's the party line (from the BSDI man page): Sendmail sends a message to one or more recipients, routing the message over whatever networks are necessary. Sendmail does internetwork for- warding as necessary to deliver the message to the correct place. Sendmail is not intended as a user interface routine; other programs pro- vide user-friendly front ends; sendmail is used only to deliver pre-for- matted messages. With no flags, sendmail reads its standard input up to an end-of-file or a line consisting only of a single dot and sends a copy of the message found there to all of the addresses listed. It determines the network(s) to use based on the syntax and contents of the addresses. Local addresses are looked up in a file and aliased appropriately. Aliasing can be prevented by preceding the address with a backslash. Normally the sender is not included in any alias expansions, e.g., if `john' sends to `group', and `group' includes `john' in the expansion, then the letter will not be delivered to `john'."}, {"response": 3, "author": "terry", "date": "Fri, Jun 13, 1997 (12:59)", "body": "Why is it that I get email notification on barton.spring.com but not on www.spring.com. The notification pops up when I am sitting arond in a shell, like this: New mail for terry@barton.spring.com has arrived: ---- From: Domain Registration Role Account Subject: Re: [NIC-970610.13373] WEBZINE.COM The following template has been returned due to the following errors. Please review the instructions in the domain registration template available at ftp://rs.internic.net/templates/domain-template.txt . The glossary of the parser errors is available at: ...more... New mail for terry@barton.spring.com has arrived: ---- Subject: Re: 20\" Monitor From: Rojo@apple.com, Laura Terry, >Also, who at Apple would be the decisionmaker on sponsoring a >community conferencing system like the Spring? barton:~ etc..."}, {"response": 4, "author": "tedchong", "date": "Fri, Jun 13, 1997 (19:46)", "body": "Terry, you have to run a shell that can alert you new mails or run a program in your startup file that can do the same. If you run csh or tcsh (use chsh to change), it will promopt \"you have new mail(s)\" when new emails arrive after a command is completed. If you like to have some parts of your new email shown, you have to put \"biff y\" on your .bashrc or .login or .cshrc file. Or you can use \"newmail -i 90\" to check for new mails every 90 secs in one of the 3 files mentioned. See \"man biff\" and man \"newmail\" for more details."}, {"response": 5, "author": "terry", "date": "Sat, Jun 14, 1997 (08:50)", "body": "Yeah, I'm running bash. No reason. It's just the one of always used. What shell do you run and why?"}, {"response": 6, "author": "tedchong", "date": "Sat, Jun 14, 1997 (23:57)", "body": "I like tcsh, just like you like apple I like orange. Well, if you use bash, be sure to put \"biff y\" in youe .bashrc file."}, {"response": 7, "author": "terry", "date": "Sun, Jun 15, 1997 (01:11)", "body": "Why?"}, {"response": 8, "author": "tedchong", "date": "Mon, Jun 16, 1997 (00:26)", "body": "putting \"biff y\" in your .bashrc file will notify you if new email is arriving... unix conference Main Menu"}]}, {"num": 13, "subject": "calendar", "response_count": 0, "posts": []}, {"num": 14, "subject": "vi", "response_count": 8, "posts": [{"response": 1, "author": "stacey", "date": "Sat, Apr 11, 1998 (03:41)", "body": "but it is NOT very user friendly!"}, {"response": 2, "author": "CotC", "date": "Mon, Aug  3, 1998 (12:28)", "body": "http://www.math.fu-berlin.de/~guckes/vi/"}, {"response": 3, "author": "terry", "date": "Wed, Jun 23, 1999 (13:28)", "body": "How do you do globabl substitutions with vi? I know to substitute one thing you use s /old/new And the man page says: [range] s[ubstitute] [/pattern/replace/] [options] [count] [flags] [range] & [options] [count] [flags] [range] ~ [options] [count] [flags] Make substitutions."}, {"response": 4, "author": "terry", "date": "Wed, Jun 23, 1999 (14:07)", "body": "3.0 - How do you do a search and replace? Well, there are a few methods. The simplest is: :s/old/new/g But, this only does it on the current line... So: :%s/old/new/g In general: :[range]s/old/new/[cgi] Where [range] is any line range, including line numbers, $ (end of file), . (current location), % (current file), or just two numbers with a dash between them. (Or even: .,+5 to mean the next five lines). [cgi] is either c, g, i, or nothing. c tells vi to prompt you before the changes, g to change all of the occurrences on a line. (type yes to tell vi to change it.) i tells vi to be case insensitive on the search. The g after the last slash tells it to replace more than just the first occurrence on each line. Another method is: :g/foobar/s/bar/baz/g This searches for foobar, and changes it to foobaz. It will leave jailbars alone, which the other method will not. Unfortunately, if jailbars appears on the same line as foobar, it will change, too. Of course you can also use regular expression search patterns, and a few other commands in the replacement part of the text. If you use \\( and \\) in the pattern to escape a sequence (and use \\1, \\2, etc., you can do lots of nifty things. For example: :g/foo/s/^\\([^ ]*\\) \\([^ ]*\\)/\\2 \\1/ will swap the first and second words on every line containing \"foo\". Special sequences allowed are: & everything which was matched by the search \\[1-9] The contents of the 1st-9th \\(\\) pair \\u The next character will be made uppercase \\U The characters until \\e or \\E will be made uppercase \\l The next character will be made lowercase \\L The characters until \\e or \\E will be made lowercase \\[eE] end the selection for making upper or lowercase"}, {"response": 5, "author": "terry", "date": "Wed, Nov 10, 1999 (11:54)", "body": "That search and replace was so useful today! /%s/wearfree/poolgoods/g"}, {"response": 6, "author": "MarciaH", "date": "Wed, Nov 10, 1999 (21:58)", "body": "Everything back in order yet? Never came in here before. Very Cool and extremely edgy!!!"}, {"response": 7, "author": "MarciaH", "date": "Wed, Nov 10, 1999 (22:14)", "body": "Some URLs I picked up in my efforts to learn vi The Vi/Ex Editor http://www.networkcomputing.com/unixworld/tutorial/009/009.html Beginners guide to Unix, vi, and X_Windows http://www-jics.cs.utk.edu/I2UNIX/unix_guide/unix_guide.html Editing Files using vi http://www.mhpcc.edu/training/vitecbids/UnixIntro/Editors.html#vi Vi Text Editor http://www.ms.washington.edu/help/editors/vi.html"}, {"response": 8, "author": "terry", "date": "Sat, Dec 29, 2001 (12:43)", "body": ":%s/href=\"[^\"]*\"/\\L&/ makes urls lowercase. unix conference Main Menu"}]}, {"num": 15, "subject": "AIX", "response_count": 2, "posts": [{"response": 1, "author": "CotC", "date": "Mon, Aug  3, 1998 (11:33)", "body": "Handy Reference Info for my fellow AIXers: http://techlib.austin.ibm.com/techlib/manuals/adoclib/aixgen/wbinfnav/cmdsreft.htm The above is a command reference, but you can clamber about the directory tree at will..."}, {"response": 2, "author": "CotC", "date": "Mon, Aug  3, 1998 (11:35)", "body": "Here's another AIX-specific reference site. Don't know if'n you can get to this one from outside the firewall. I can't try it from home. I killed Win95 once yet again last night... http://www.austin.ibm.com/cgi-bin/ds_form?config=/usr/share/man/info/en_US/a_doc_lib/data/base.cfg unix conference Main Menu"}]}, {"num": 16, "subject": "subnets", "response_count": 1, "posts": [{"response": 1, "author": "terry", "date": "Tue, Sep 15, 1998 (16:34)", "body": "A great learning tool for getting started with subnets: http://www.ccci.com/subcalc/subcalc.htm unix conference Main Menu"}]}, {"num": 17, "subject": "emacs", "response_count": 7, "posts": [{"response": 1, "author": "ratthing", "date": "Fri, Nov  6, 1998 (10:02)", "body": "...and a way of life!"}, {"response": 2, "author": "tami", "date": "Mon, Nov 23, 1998 (12:20)", "body": "Yup. Emacs is on Spring's machines too. So far the best working one is on www. I guess I should try M-x gnus"}, {"response": 3, "author": "terry", "date": "Mon, Nov 23, 1998 (14:39)", "body": "You mean on access.spring.net don't you?"}, {"response": 4, "author": "terry", "date": "Tue, Nov 24, 1998 (08:19)", "body": "When they refer to the Meta key in emacs, is this the alt key? For example, from the faq: 1: What do these mean: C-h, M-C-a, RET, \"ESC a\", etc.? C-x means press the `x' key while holding down the Control key. M-x means press the `x' key while holding down the Meta key. M-C-x means press the `x' key while holding down both the Control key and the Meta key. C-M-a is a synonym for M-C-a. RET, LFD, DEL, ESC, and TAB respectively refer to pressing the Return, Linefeed (aka Newline), Delete, Escape, and Tab keys and are equivalent to C-m, C-j, C-?, C-[, Key sequences longer than one key (and some single-key sequences) are inside double quotes or on lines by themselves. Any real spaces in such a key sequence should be ignored; only SPC really means press the space key. The ASCII code sent by C-x (except for C-?) is the value that would be sent by pressing just `x' minus 96 (or 64 for uppercase `X') and will be from 0 to 31. The ASCII code sent by M-x is the sum of 128 and the ASCII code that would be sent by pressing just the `x' key. Essentially, the Control key turns off bits 5 and 6 and the Meta key turns on bit 7. For further information, see `Characters' and `Keys' in the on-line manual. NOTE: C-? (aka DEL) is ASCII code 127. It is a misnomer to call C-? a \"control\" key, since 127 has both bits 5 and 6 turned ON. Also, on very few keyboards does Control-? generate ASCII code 127."}, {"response": 5, "author": "terry", "date": "Tue, Nov 24, 1998 (08:19)", "body": "The biggest problem I have so far with emacs is that it uses the alt key a lot, which brings up menu options in crt. The Control key combinations work fine. I suppose I could remap the alt key in CRT, as CRT allows this. As I mentioned, emacs refers to the alt key as the meta key."}, {"response": 6, "author": "daniel", "date": "Sun, Dec  6, 1998 (16:41)", "body": "OK guys...when and where are you going to install LINUX? Perhaps spring.net should consider offering some (for a minor fee) some shell accounts? Or a dedicated Linux box so interested geeks could dial into?"}, {"response": 7, "author": "KitchenManager", "date": "Sat, Jan  2, 1999 (14:09)", "body": "Terry can set you up a shell account, or I can come over to the Austin house and set one up for you, but it's gonna be on BSDI... (I don't currently have access to a computer that I can telnet in from...) unix conference Main Menu"}]}, {"num": 18, "subject": "unix training on the web", "response_count": 4, "posts": [{"response": 1, "author": "terry", "date": "Mon, Jan  4, 1999 (11:13)", "body": "Here's a real basic place to start: http://www.ksu.edu/cns/isc/Unix/Unix_A/log_on.html I skipped the first few screens which dealt with KU specific type stuff."}, {"response": 2, "author": "terry", "date": "Mon, Jan  4, 1999 (11:38)", "body": "And another real good one: http://www.nmt.edu/tcc/help/unix/unix_cmd.html"}, {"response": 3, "author": "CotC", "date": "Mon, Jan  4, 1999 (11:42)", "body": "Unix Guru Universe Josh's Linux Guide UNIXWorld Tutorials SunExpert Archives etc."}, {"response": 4, "author": "terry", "date": "Mon, Jan  4, 1999 (15:57)", "body": "Good stuff, lil' lummox. unix conference Main Menu"}]}, {"num": 19, "subject": "tar command", "response_count": 7, "posts": [{"response": 1, "author": "terry", "date": "Thu, Jan 20, 2000 (09:07)", "body": "tar cvf /dev/rct0 /home This command writes a tar archive to the tape device /dev/rct0. It copies the files in the /home directory, and all subdirectories of /home to the tape device. tar cvf /dev/fd0 /home/fred This command writes a tar archive to the diskette device /dev/fd0. It copies the files in the /home/fred directory, and all subdirectories of /home/fred to the diskette. tar cvf /tmp/home.tar /home This command creates a tar archive named /tmp/home.tar. The tar command copies the files in the /home directory, and all subdirectories of /home. tar cvf /tmp/home.tar /home compress /tmp/home.tar This example shows two commands issued in sequence. The first command creates a tar archive named /tmp/home.tar. The second command compresses the tar archive, and replaces it with a new compressed tar archive, named /tmp/home.tar.Z. See the compress command for more details."}, {"response": 2, "author": "terry", "date": "Thu, Jan 20, 2000 (09:09)", "body": "Tarball is a jargon term for a tar archive, suggesting \"a bunch of files stuck together in a ball of tar.\""}, {"response": 3, "author": "terry", "date": "Thu, Jan 20, 2000 (09:10)", "body": "TAR Tape ARchive utility To archive (compress) files: tar -cvf [tar-file] [files-to-archive] To extract files: tar -xvf [tar-file] SWITCH: -c Compress -x Extract -v Verbose mode on -f Work with local disk file instead of tape drive Compressing a directory, tar cvf - | gzip > directory.tgz Quick de-compression of a compressed, tar file gzip -cd file.tgz | tar xvf -"}, {"response": 4, "author": "terry", "date": "Thu, Oct 11, 2001 (12:15)", "body": "tar -cf - files | compress > tarfile one line that compreses tar files."}, {"response": 5, "author": "terry", "date": "Sun, Sep 15, 2002 (19:17)", "body": "Command tar Description The \"tar\" command stands for tape archive. This command is used to create new archives, list files in existing archives, and extract files from archives. The tar command can be used to write archives directly to tape devices, or you can use it to create archive files on disk. In many cases, tar archives are created on disk so it's easier to transport them across networks, such as the Internet. Note - the tar command does not compress files. Use the compress command to compress the tar archive after you've created it. Examples tar cvf /dev/rct0 /home This command writes a tar archive to the tape device /dev/rct0. It copies the files in the /home directory, and all subdirectories of /home to the tape device. tar cvf /dev/fd0 /home/fred This command writes a tar archive to the diskette device /dev/fd0. It copies the files in the /home/fred directory, and all subdirectories of /home/fred to the diskette. tar cvf /tmp/home.tar /home This command creates a tar archive named /tmp/home.tar. The tar command copies the files in the /home directory, and all subdirectories of /home. tar cvf /tmp/home.tar /home compress /tmp/home.tar This example shows two commands issued in sequence. The first command creates a tar archive named /tmp/home.tar. The second command compresses the tar archive, and replaces it with a new compressed tar archive, named /tmp/home.tar.Z. See the compress command for more details."}, {"response": 6, "author": "terry", "date": "Tue, Sep 17, 2002 (11:17)", "body": "What do I do with a .tar file? A .tar.gz file? TAR is a UNIX command that allows you to create a single archive file containing many files. Such archiving allows you to maintain directory relationships and facilitates transferring complex programs with many separate but integrated parts that must have their relationships preserved. TAR has a plethora of options that allow you to do archiving and unpacking in many ways. However, for the purpose of unpacking CGI applications, the commands will be fairly simple. The files on our site are now GZipped (.tar.gz). That just means we compressed them with GNU GZip. Your browser should be able to download it and recognize the file without any problems. Unpacking on UNIX tar xvfpz file_name.tar.gz or tar xvfz file_name.tar (if \"p\" won't work) TAR will go through the archive file and extract each individual directory and file, expanding them into their appropriate places beneath the current directory. The \"xvfzp\" letters in the TAR command above are parameters that instruct the program to decompress the files and then extract the files and directories out of the \".tar\" file. If you are not using GNU TAR, you will need to add a step to the process: gunzip file_name.tar.gz (removes the .gz) tar xvfp file_name.tar or tar xvf file_name.tar (if \"p\" won't work) Tar Extraction Parameters: Parameter Description x Tells tar to extract the files. v Tells tar to output information about the status of its extraction while it is performing the work. f Informs tar to use the \".tar\" filename as the source of the files to be extracted. The reason the \"f\" parameter has to be used is that tar, by default, archives files and directories to a tape drive. TAR is actually short for \"[T]ape [AR]chive\". p Notes that the original permissions should be maintained. z Instructs TAR to decompress a file first. Unpacking on Windows and Mac If you are not using a UNIX-based web server, or donIf you use a Windows-based text editor however, you need to be very careful about accidentally inserting platform-specific, invisible control characters (like carriage return characters) into the files. If you are editing the files on a Windows box, this is often a problem because Windows programs are well-known for their desire to insert Windows-only characters into files. You will know that invisible characters have infected the files if you get a 500 Server Error when trying to run the application from the web, and error messages like the following if you run the application from the command line: Illegal character \\015 (carriage return) at app_name.cgi line 2. or Can't find string terminator \" [some text here]\" anywhere before EOF Generally, this problem can be solved either by choosing a text editor that does not insert the characters or by setting your FTP program to upload edited files to the web server machine using \"ASCII mode\" instead of \"BINARY mode\". You should be able to set the FTP program to transfer in ASCII mode using the program's preferences. We recommend using WS_FTP that has this functionality and is available at http://www.shareware.com/ . However, if the files have already been sent over to a UNIX-based web server, you can strip bad characters using: find . -type f -exec perl -pi -e 's|\\cM||' {} \\;"}, {"response": 7, "author": "terry", "date": "Sun, Oct 13, 2002 (20:48)", "body": "How many times have you encountered tar files that include a full path to every file instead of the relative paths (which make extracting the files into your chosen directory easier)? The paths included in the tar file depend, of course, on the commands originally used to create the tar file. Tar files that include full paths can be troublesome to use because they sometimes \"want\" to be extracted into a file system that doesn't have enough free space to accommodate them. Most of us don't want to spend time extracting portions of the files and moving them to the correct location. The reason for using a tar file in the first place is, after all, so that you get a collection of files all with the correct relationship to each other and ready to be replicated anywhere. Since tar files have a special format, processing them so that the paths within the files will be removed isn't an easy task. What I normally do when I have to deal with one of these files is create a symbolic link that looks like the intended directory but diverts the extraction to the location where I want the files. For example, let's say that I am given a tar file with contents such as this: $ tar tvf eg.tar -rwxr-xr-x 1111/14 514 Jul 12 09:31 2001 /opt/bin/wiglet/pics/coreadm -rw-r--r-- 1111/14 17408 Aug 17 16:39 2001 /opt/bin/wiglet/pics/eg.tar -rwxr-xr-x 1111/14 4199 Jul 12 09:10 2001 /opt/bin/wiglet/pics/eg1 -rwxr-xr-x 1111/14 22208 Jul 12 10:53 2001 /opt/bin/wiglet/pics/eg10 -rwxr-xr-x 1111/14 172123 Jul 12 10:54 2001 /opt/bin/wiglet/pics/eg11 -rwxr-xr-x 1111/14 762392 Jul 12 10:59 2001 /opt/bin/wiglet/pics/eg12 -rwxr-xr-x 1111/14 485164 Jul 12 09:13 2001 /opt/bin/wiglet/pics/eg2 -rwxr-xr-x 1111/14 943145 Jul 12 09:21 2001 /opt/bin/wiglet/pics/eg3 -rwxr-xr-x 1111/14 843267 Jul 12 09:28 2001 /opt/bin/wiglet/pics/eg4 -rwxr-xr-x 1111/14 383048 Jul 12 10:29 2001 /opt/bin/wiglet/pics/eg5 -rwxr-xr-x 1111/14 38457 Jul 12 10:33 2001 /opt/bin/wiglet/pics/eg6 -rwxr-xr-x 1111/14 832156 Jul 12 10:35 2001 /opt/bin/wiglet/pics/eg7 -rwxr-xr-x 1111/14 102368 Jul 12 10:36 2001 /opt/bin/wiglet/pics/eg8 -rwxr-xr-x 1111/14 5555153 Jul 12 10:52 2001 /opt/bin/wiglet/pics/eg9 -rwxr--r-- 1111/14 881959 Jul 12 09:52 2001 /opt/bin/wiglet/apps/powertool If I don't have room in /opt for these files and really want them in /usr/local, I can't simply untar the tar file. Instead, I create a symbolic link for the wiglet directory like this: # cd /opt/bin # mkdir /usr/local/wiglet # ln -s /usr/local/wiglet . Afterwards, I can untar my file and the contents land in /usr/local/wiglet instead of /opt/bin/wiglet. When you create a tar file, the directory structure will reflect the location from which the tar file was created unless, of course, you specify the full path in your command. You can be inside the /opt/bin/wiglet directory and type: # tar cvf wiglet.tar /opt/bin/wiglet and you'll end up with a tar file with full paths included. Typing this instead: # tar cvf wiglet.tar . will create a tar file that includes paths starting with \"./\". The command: # tar cvf wiglet.tar * will omit the harmless \"./\". If the directory specified in the ill-behaved tar file already exists on your system, workarounds are that much more of problem. If you can, move the current same-named directory out of the way temporarily, create the symlink, extract the files, and then put everything back the way it was. It's probably possible to write a program that would replace the paths in a tar file with paths more to your liking. However, such a program would have to consider the checksums built into tar. A careless replacement would probably yield a file that could not be read -- at least not by tar. If you can't move the current directory because it's in use, another option is to extract the tar file on another system and tar it up correctly (i.e., with relative paths). Then, you can use the file on the intended system without having to go through any special contortions from http://www.itworld.com/nl/unix_sys_adm/08222001/ unix conference Main Menu"}]}, {"num": 2, "subject": "Linux", "response_count": 31, "posts": [{"response": 1, "author": "terry", "date": "Sun, Feb  9, 1997 (21:20)", "body": "It's len ux as opposed to lie nix as I hear tell."}, {"response": 2, "author": "terry", "date": "Sun, Feb  9, 1997 (21:20)", "body": "Caldera Open Linux distribution is announced. $59 now, $300 later. Like the latest Red Hat, it includes the 2.x kernel with SMP capability. I understand that Dejanews, with its huge daily hit rate, is run on SMP Linux boxes."}, {"response": 3, "author": "ian", "date": "Thu, Feb 27, 1997 (22:29)", "body": "I use Linux (Slackware from InfoMagic, but other suppliers and versions are good), along with OS/2, Windows 95, and Windows 3.1/DOS. At work, I use NT. The company I am working for is switching to Linux for development -- we supply one set of software to run on about 8 or so UNIX platforms, all PC platforms, DEC VMS and IBM mainframes. We develop in UNIX (soon, RedHat Linux) and port to the other platforms. From my own experience, I think Linux is easier to use, more efficient than, and more reliable than other PC operating systems. Up to now, however, Linux has been at a severe disadvantage in terms of the availability of off-the-shelf software. Thus, we will find it advantageous to use Linux for R&D, but are not ready to use it for administration. This may change quite rapidly -- Linux was not a strong contender two years ago, and was only MINIX five years ago. Caveat Microsoft!"}, {"response": 4, "author": "cacman", "date": "Mon, Mar 10, 1997 (07:53)", "body": "Linux is still weak on off-the-shelf software (OK, we have Applixware from RedHat, but it's far from enough), but this will change in the near future because a lot of non-tech people is discovering that Linux is a great OS. Long live Linux!"}, {"response": 5, "author": "tedchong", "date": "Fri, Apr 25, 1997 (00:22)", "body": "I use both Linux and FreeBSD, but found Linux better for people using DOS previously. I don't need to run GUI so I find Linux the best interm of speed and reliability."}, {"response": 6, "author": "terry", "date": "Sun, May 25, 1997 (11:18)", "body": "How does it stack up against the BSDI we use here, Ted? You've been aroudn our shell."}, {"response": 7, "author": "tedchong", "date": "Fri, May 30, 1997 (21:19)", "body": "I do have 2 BSDI systems around here (in the office) but I think nothing beats Linux (or FreeBSD) as it is free and easy to setup. I find Linux is good for almost everything from a small company web server to a busy medium scale ISP."}, {"response": 8, "author": "terry", "date": "Sat, May 31, 1997 (11:48)", "body": "I'm going to drift a little. Do you know the step by step procedure to add a 3mb scsi hard drive to barton.spring.com? As you know, we're real short on hard disk space right now. barton:~ df Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/sd0a 9727 5459 3781 59% / /dev/sd0f 705727 78737 591703 12% /home /dev/sd0h 198335 177773 10645 94% /usr /dev/sd0g 63535 57297 3061 95% /var barton:~"}, {"response": 9, "author": "tedchong", "date": "Sun, Jun  1, 1997 (09:42)", "body": "Terry, which directory is short of space on barton? /home is only 12% used, still have about 600MB left :-)"}, {"response": 10, "author": "terry", "date": "Mon, Jun  2, 1997 (08:20)", "body": "/var and /usr are both real full. I need to add to /var because that's where a bunch of mail keeps overflowing and filling up the hard drive. I could use a lot more room there. I'm thinking about plugging in a 3 gb Quantum and setting it up as the second drive on www. Any tips on upgrading that system (step by step procedure). I guess the first would be to plug it in and run BSDI's disk formatting program. Then link it to /var as a filesystem."}, {"response": 11, "author": "tedchong", "date": "Mon, Jun  2, 1997 (09:15)", "body": "For short run you can link /var to /home since /home has 600MB of space. To do this, just run run on shell: mkdir /home/var ; ln -s /home/var /var make sure /var is not there in the first place."}, {"response": 12, "author": "terry", "date": "Mon, Jun  2, 1997 (11:22)", "body": "cheech"}, {"response": 13, "author": "terry", "date": "Mon, Jun  2, 1997 (11:27)", "body": "I did this: barton# mkdir /home/var ; ln -s /home/var /var barton# df Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/sd0a 9727 5459 3781 59% / /dev/sd0f 705727 77061 593379 11% /home /dev/sd0h 198335 177773 10645 94% /usr /dev/sd0g 63535 57773 2585 96% /var barton# Do I need to reboot for it to take effect now?"}, {"response": 14, "author": "tedchong", "date": "Mon, Jun  2, 1997 (19:33)", "body": "Re: /var on barton Terry, I just did an 'du' on /var at barton and found below directories has eaten the most space: 17818 ./www 60920 ./account 22142 ./log 8218 ./webdocs You don't have to reboot barton. What I found you have not link /var to /home/var, to to this, see below step-by-step: 1. rm -f /home/var 2. mv /var /home/ 3. ln -s /home/var /var This will make a link from /var to /home/var"}, {"response": 15, "author": "terry", "date": "Tue, Jun  3, 1997 (09:20)", "body": "OK I'll try that now. Check and see if this works ok? Let's move this discussion to the BSDI topic ok?"}, {"response": 16, "author": "terry", "date": "Sat, Aug 23, 1997 (03:37)", "body": "Torvalds is now the trademark owner for Linux. http://www.LinuxMall.com/announce/lxtm.001.html"}, {"response": 17, "author": "steven", "date": "Tue, Oct 27, 1998 (08:38)", "body": "Wow, that's a pretty bad deal (the Trademark suit). I guess someone _had_ to try it. so.. what do people like in the way of 'Real Linux Apps' nowadays? Corel's going to give away their suite for Linux soon, I hear. -steven"}, {"response": 18, "author": "terry", "date": "Tue, Oct 27, 1998 (11:14)", "body": "Welcome Steven! I'm partial to BSDI,a s you can tell, although our newest system is running FreeBSD."}, {"response": 19, "author": "terry", "date": "Thu, Nov  5, 1998 (14:08)", "body": "From love@cptech.org Mon Nov 2 18:54:57 1998 Return-Path: Date: Mon, 2 Nov 1998 18:54:57 -0500 (EST) Errors-To: info-policy-notes-owner@essential.org Reply-To: love@cptech.org Originator: info-policy-notes@essential.org Sender: info-policy-notes@essential.org From: James Love To: Multiple recipients of list INFO-POLICY-NOTES Subject: the Halloween Document ------------------------------------------------------------ Info-Policy-Notes | News from Consumer Project on Technology ------------------------------------------------------------ November 2, 1998 The Halloween Document Microsoft has confirmed that this internal document, which was leaked to Eric Raymond, is authentic. It is the Microsoft strategy to deal with Linux and other free software platforms, referred to as \"Open Source Software\" or OSS by the MS author. Eric Raymond has placed an annotated version of the document on the web at: http://www.tuxedo.org/~esr/halloween.html The memorandum offers important insight into Microsoft's understanding of the free/open source software movement. It indicates, for example, that Microsoft needs to attack the process and the culture of the free software movement, more than any particular company. Eric Raymond sees awareness by Microsoft that the Internet Engineering Task Force (IETF) and its support of open software is a threat to Microsoft's goal of dominating server markets. These are the excerpts from the document that Eric placed in his introduction. Jamie Love 202.387.8030 * OSS poses a direct, short-term revenue and platform threat to Microsoft, particularly in server space. Additionally, the intrinsic parallelism and free idea exchange in OSS has benefits that are not replicable with our current licensing model and therefore present a long term developer mindshare threat. * Recent case studies (the Internet) provide very dramatic evidence ... that commercial quality can be achieved / exceeded by OSS projects. * ...to understand how to compete against OSS, we must target a process rather than a company. * OSS is long-term credible ... FUD tactics can not be used to combat it. * Linux and other OSS advocates are making a progressively more credible argument that OSS software is at least as robust -- if not more -- than commercial alternatives. The Internet provides an ideal, high-visibility showcase for the OSS world. * Linux has been deployed in mission critical, commercial environments with an excellent pool of public testimonials. ... Linux outperforms many other UNIXes ... Linux is on track to eventually own the x86 UNIX market ... * Linux can win as long as services / protocols are commodities. * OSS projects have been able to gain a foothold in many server applications because of the wide utility of highly commoditized, simple protocols. By extending these protocols and developing new protocols, we can deny OSS projects entry into the market. * The ability of the OSS process to collect and harness the collective IQ of thousands of individuals across the Internet is simply amazing. More importantly, OSS evangelization scales with the size of the Internet much faster than our own evangelization efforts appear to scale. ------------------------------------------------------------- INFORMATION POLICY NOTES: the Consumer Project on Technology http://www.cptech.org , 202.387.8030, fax 202.234.5127. Archives of Info-Policy-Notes are available from http://www.essential.org/listproc/info-policy-notes/ Subscription requests to listproc@cptech.org with the message: subscribe info-policy-notes Jane Doe To be removed from the list, the message should read, unsub info-policy-notes -------------------------------------------------------------"}, {"response": 20, "author": "CotC", "date": "Fri, Nov  6, 1998 (10:24)", "body": "Interesting..."}, {"response": 21, "author": "CotC", "date": "Fri, Nov  6, 1998 (10:32)", "body": "Time to drift some more. Linux is probably my favorite UNIX. It's easy to set up and administer, it's free, and it'll run on my cheapo Intel hardware at home (which is what keeps AIX from being my favorite UNIX, by the way). Solaris x86 was also cheap ($18, including postage and handling) but if you don't like the fact that there aren't a whole lot of commercial applications for Linux, you're _really_ going to hate Solaris x86 (at least if you want to be able to _afford_ the commercial apps :-}). There's lso a really limited selection of drivers, and just try and find any useful information for the Intel version on Sun's website... Oh, yeah, it doesn't get along real well with Linux, either. Linux Swap Space and Solaris Native look the same to System Commander, Partition Magic, and a couple different flavors of fdisk. OK. I feel better now..."}, {"response": 22, "author": "terry", "date": "Tue, Nov 10, 1998 (07:40)", "body": "Technology News Microsoft Saw Linux As Copyright Threat (11/09/98 5:16 p.m. ET) By Andy Patrizio, TechWeb A recently leaked internal Microsoft memo outlining the threat posed by Linux showed that the company has considered taking legal action against the free operating system. \"Halloween II\" was the second of three memos written by Vinod Valloppillil, a program manager for Microsoft Proxy Server, describing how Linux could hurt demand for Windows NT, particularly in the server market. In a section titled \"Process Vulnerabilities,\" Valloppillil wrote that Linux will \"cream skim\" NT Server's best features. He added, \"The Linux community is very willing to copy features from other OSes if it will serve their needs. Consequently, there is the very real long-term threat that as MS expends the development dollars to create a bevy of new features in NT, Linux will simply cherry pick the best features and incorporate them into their codebase.\" Valloppillil concluded: \"The effect of patents and copyright in combating Linux remains to be investigated.\" Linux backers say borrowing an innovative idea in the software industry is something everyone -- including Microsoft -- has done. \"For Microsoft to accuse someone of stealing ideas is a little like the pot calling the kettle black,\" said Bob Young, CEO of Red Hat Software, a Linux vendor. The statement may reflect the author's lack of knowledge, rather than company policy, Young said. Earlier this week, Microsoft admitted the document was genuine, but said it did not plan to act on any of its recommendations. \"To bluntly make a statement like that, that Microsoft is the innovator and other people are creaming off them, is a little na\ufffdve,\" said John \"maddog\" Hall, executive director of Linux International, a non-profit group. For a company the size and strength of Microsoft to wield the law against a free OS, developed largely by college students and programmers working in their spare time, could also be a public-relations disaster for Microsoft. \"I don't think they're that foolish, frankly,\" said Jerry Davis, founding partner with Davis & Schroeder, an intellectual-property law firm in Monterey, Calif. Davis is counsel for Linux International and Linus Torvalds, the creator of Linux. \"Suing the Linux community would do them so much ill will.\" According to Davis, although a copyright gives the owner exclusive rights to the way in which an idea is expressed, like source code, a fundamental concept of intellectual-property law is that no one owns an idea. And Microsoft has borrowed plenty of ideas in the past. \"Microsoft has shamelessly done that with respect to the Apple OS,\" said Davis. \"They also did it to Digital Research from a product that predated Windows called GEM, and they've done it repeatedly with application software.\" A Microsoft spokesman was unavailable for comment. The company has responded with a rebuttal document in which it acknowledges that the documents are real, but reflect the opinion of one engineer at the company, and not something being used to drive Microsoft policy."}, {"response": 23, "author": "tami", "date": "Mon, Nov 23, 1998 (12:16)", "body": "Linus is responsible for the linux kernal, but not for the entire OS. Most olinux systems contain GNU software developed by the Free Software Foundation (Project GNU).GCC, GDB, emacs - all were developed by Richard Stallman who put them under the GNU General Public License so we could all benefit. Gates has o has used GNU/Linux wants to downgrade to any flavor of microsoft. For a better look at GNU software, visit www.gnu.org."}, {"response": 24, "author": "terry", "date": "Mon, Jan 25, 1999 (10:58)", "body": "Linux Users Demand Refund Not from Red hat, from Microsoft. Instead of automatically clicking on the \"I Agree\" button that says they capitulate to every demand MS makes on them as a condition of using Windows, some have decided to follow the instructions to \"contact the manufacturer for instructions on return of the unused product(s) for a refund\". Although Microsoft wrote the language of the agreement, MS spokesman Tom Pilla says as far as his company is concerned buying the computer with Windows pre-loaded constitutes an agreement to use it and disqualifies users from a refund; another PR triumph for Redmond's Goliath seems to be in the making. [irony alert] More details said to be available at"}, {"response": 25, "author": "CotC", "date": "Mon, Jan 25, 1999 (11:12)", "body": "Interesting. Please let us know more when you have it..."}, {"response": 26, "author": "KitchenManager", "date": "Mon, Jan 25, 1999 (22:27)", "body": "too cool, huh, Tommy?"}, {"response": 27, "author": "mikeg", "date": "Sat, Apr 17, 1999 (07:03)", "body": "Mmm...I've now installed RedHat 5.2 linux and it's LUVVVEELLLLY.....no more windows crashes for me! Oh, except when I want to listen to some RealAudio. Or print something out. Or use ICQ. :-)) Such is life with minority Operating Systems :)"}, {"response": 28, "author": "terry", "date": "Sun, Apr 18, 1999 (16:47)", "body": "Have you read Neal Stephenson's essay on operating systems?"}, {"response": 29, "author": "mikeg", "date": "Mon, Apr 19, 1999 (15:40)", "body": "nope. where can i find it?"}, {"response": 30, "author": "terry", "date": "Wed, Oct 13, 1999 (09:39)", "body": "VA Linux, SGI and O'Reilly (how's about that for a trifecta!) are getting behind Debian Linux with a big co-marketing deal (including a Star Office tie-in). http://www.nytimes.com/library/tech/99/10/biztech/articles/12linux.html"}, {"response": 31, "author": "terry", "date": "Sun, Oct 31, 1999 (11:40)", "body": "I'll be watching this in the next couple of days: http://webevents.broadcast.com/ibm/pwd102799/index.tl?loc=34 unix conference Main Menu"}]}, {"num": 20, "subject": "global search and replace methods", "response_count": 23, "posts": [{"response": 1, "author": "sprin5", "date": "Thu, Apr 27, 2000 (21:40)", "body": "I needed to do some global searching and replacing tonight so I created the following one line perl script: perl -pi~ -e \"s/Sorry, we are still under construction!/Bob and Paul are working on this site/g;\" `find . -name \"*.htm\"` I named it replace.pl and performed a chmod+x on it and voila. It worked like a champ. That's one way to do it that worked for me."}, {"response": 2, "author": "MarciaH", "date": "Thu, Apr 27, 2000 (22:04)", "body": "Fantastic. Next time you'd better post a translation with your comments. Wish I knew more about it, but there are just so many hours and all that..."}, {"response": 3, "author": "sprin5", "date": "Fri, Apr 28, 2000 (08:57)", "body": "I forgot to say I put it in the virtual_html directory."}, {"response": 4, "author": "terry", "date": "Mon, Jul 16, 2001 (10:36)", "body": "find . -type f | xargs grep -i hotjava will find all instances of the word hotjava on the whole system"}, {"response": 5, "author": "terry", "date": "Thu, Aug  9, 2001 (09:47)", "body": "Search/Replace in many Files example of how to run a search and replace through many files in UNIX. Comes in handy for situations like when Netscape Composer changes all the links to absolute rather than relative. from the unix command prompt: type foreach file (*.html) where *.html is the search pattern there will be a new prompt. Type: cp $file $file.orig to backup the files mv $file xx which moves the old files into a 'temp' file sed '1,$s/search/replace/g' xx > $file where search and replace are your strings. Note: Special characters such as / should be preceded by a \\ end Once you type 'end', it will execute these commands."}, {"response": 6, "author": "terry", "date": "Tue, Sep 18, 2001 (09:51)", "body": "Finding them all can be done in many, many ways, but here is one way, to seach every regular file on the machine... 1. become root 2. type: find / -type f | xargs grep -l localhost | Mail root &"}, {"response": 7, "author": "terry", "date": "Sat, Apr  6, 2002 (18:51)", "body": "Perl offers a solution that reduces it from a three-step process (change/diff/move) to one: perl -i -wpe 's/$INPUT_TXT/$OUTPUT_TXT/g' $file"}, {"response": 8, "author": "terry", "date": "Thu, Sep  4, 2003 (13:30)", "body": "I need to rename a whole directory worth of files from one extension to another. Is there a way to combine one of the above tricks with xargs or something to do that easily? Use a bourne type shell: for file in ; do mv $file ${file%.old}.new done in csh/tcsh, I think it would be something like this: foreach file ( pattern ) mv $file !#:1:r.new end In either case, watch out for the glob matching leading pathnames. Another method: for i in `ls *.foo` do mv $i `basename $i .foo`.bar done"}, {"response": 9, "author": "terry", "date": "Thu, Oct  2, 2003 (23:17)", "body": "jkcunningham Is there a relatively easy way to search for a string recursively throughout a directory tree and replace it with another string? It could easily be restricted to files of a certain extension. Thanks. neo77777 Look at perl scripts, it is a powerfull text editing tool as well Mik I think the easiest way would be to create one simple script to replace a string. Something like: [CODE] #!/bin/bash if [ $# -lt 3 ] then echo \"usage: replace \" exit fi sed -e \"s/$2/$3/g\" $1 > $1.~bak mv $1.~bak $1 [/CODE] And then run one command to search through a directory and replace strings using the script above. Something like: find . -name \"*.txt\" -exec \\replace {} \"some string\" \"something\" \\; It's just a quick script I put together so I can't guarantee that it will work in all cases. But it should work for most files. If it's important data you are gonna be running it on you might want to change the mv into a cp to make sure you still have the backup file in case it goes wrong. unSpawn I use [url=\" http://www.laffeycomputer.com/rpl.html\"]rpl[/url] , easy and safe (simulation mode). jkcunningham Thanks. I'll try them both. source http://www.linuxquestions.org/questions/archive/1/2002/07/4/26349"}, {"response": 10, "author": "terry", "date": "Mon, Mar  8, 2004 (13:03)", "body": "find ./ -type f -name \"foo*\" -print | sed 's:\\(.*\\)/foo\\(.*\\):mv \"&\" \"\\1\\/bar\\2\":' |sh is another way"}, {"response": 11, "author": "terry", "date": "Mon, Apr 19, 2004 (19:00)", "body": "Unix Tip 2002-04-24 16:02:31 How to do a global search and replace in unix for i in `egrep -lR \"spurious dipthong\" .`; do perl -i -pe \"s/spurious dipthong/non-spurious dipthong/g\" $i ; done The above will find \"spurious dipthong\" and replace it with \"non-spurious dipthong\". What's nice about this, is it's fast, and you can use any crazy ass perl regex you want. Also, you can search for files that contain \"jay and silent bob\" and then replace all occurances of \"bitch\" with \"boo-boo-kitty-fuck\", so as you can see it's pretty versatile. How it works It's a normal shell for loop. The for loop is recieving a list (via the magic back-tics `) from egrep... `egrep -lR \"Lisette\" .` That -l means, only return file names (and paths). The -R means, be recursive. And of course the period at the end means, start looking from this location. You can replace the period with a path (I think). perl -i -pe \"s/is (cool|awesome)/is super $1/ig\" $i Each matching filename is put in $i (one at a time) and passed to perl, which is in 'in place editing mode' with the -i flag. Then, perl does it's s///g magic on the file. Think of the fun. Don't forget to back stuff up before doing global search and replaces!"}, {"response": 12, "author": "terry", "date": "Mon, Apr 19, 2004 (19:14)", "body": "Since it's damn near impossible to find online the simplest way to scan a Unix directory of files, search for one text pattern, and replace with another, I am now archiving the simplest method I could find (which I've tested and have proven that it works beautifully). Simply cd to the directory where your files live, modify (or leave) the *.php to match the file type you are modifying, then run the following at the command line: for fl in *.php; do mv $fl $fl.old sed 's/FINDSTRING/REPLACESTRING/g' $fl.old > $fl #rm -f $fl.old done Uncomment rm -f $fl.old if you don't want to bother keeping a copy of the old files. Simple, eh? It's all about sed, baby."}, {"response": 13, "author": "terry", "date": "Mon, Apr 19, 2004 (19:30)", "body": "#!/bin/sh if [ $# -lt 3 ] ; then echo -e \"Wrong number of parameters.\" echo -e \"Usage:\" echo -e \" renall 'filepat' findstring replacestring\\n\" exit 1 fi #echo $1 $2 $3 for i in `find . -name \"$1\" -exec grep -l \"$2\" {} \\;` do mv \"$i\" \"$i.sedsave\" sed \"s/$2/$3/g\" \"$i.sedsave\" > \"$i\" echo $i #rm \"$i.sedsave\" done"}, {"response": 14, "author": "terry", "date": "Mon, Apr 19, 2004 (19:32)", "body": "The code: #!/usr/local/bin/perl # # Usage: rename perlexpr [files] ($regexp = shift @ARGV) || die \"Usage: rename perlexpr [filenames]\\n\"; if (!@ARGV) { @ARGV = ; chomp(@ARGV); } foreach $_ (@ARGV) { $old_name = $_; eval $regexp; die $@ if $@; rename($old_name, $_) unless $old_name eq $_; } exit(0); The Explanation Save the above code into a file called rename. Make sure that the permissions are set correctly so that you can execute the script. Also check to make sure that your Perl interpreter is in /usr/local/bin. If Perl is somewhere else, you'll need to change the first line to point to where Perl is installed on your system. To use the script you use: rename perlexpr [files] where perlexpr is the substitution operator, i.e., s///. You can actually pass any Perl expression through to perlexpr allowing you to do more complex file renaming actions. The files argument is a list of filenames that you want to change. You can leave the files argument out and the script will take a list of names from STDIN. The Examples Make all the files in the directory end with .html instead of .txt. rename 's/txt$/html/' * Change all the files prefixed with the text mah and suffixed with .new to be suffixed with .old instead. rename 's/new$/old/' mah*.new Hide every file in the directory by prefixing the filename with a . rename 's/(.+)/\\.$1/' * The possibilities are endless. You should be careful however as you are dealing with regular expressions. You should be as specific as possible when specifying your patterns otherwise you may rename a file in a way that you had not anticipated. For instance, take the first example. If you had typed: rename 's/txt/html/' * (notice the missing $ in the pattern?) and you had a file named newtxt.txt, the script would rename the file to newhtml.txt which might not have been what you wanted. Hopefully this script will be useful to you. If you have any problems or questions, you can e-mail them to me at dmah@vox.org"}, {"response": 15, "author": "terry", "date": "Mon, Apr 19, 2004 (19:49)", "body": "find . -name index.shtml -exec perl -pi.bak -e \"s/string1/string2/g\" {} \\;"}, {"response": 16, "author": "terry", "date": "Tue, Apr 20, 2004 (23:07)", "body": "Here's a *file* renaming utility I got from Jeff Monks. for x in `find . -name temp_index.htm` do dir=`dirname $x` mv $x $dir/index.html done Need to test it."}, {"response": 17, "author": "terry", "date": "Tue, Jan 31, 2006 (20:55)", "body": "Want to use sed(1) to edit a file in place? Well, to replace every 'e' with an 'o', in a file named 'foo', you can do: sed -i.bak s/e/o/g foo And you'll get a backup of the original in a file named 'foo.bak', but if you want no backup: sed -i '' s/e/o/g foo"}, {"response": 18, "author": "terry", "date": "Wed, Mar  8, 2006 (08:18)", "body": "http://www.uwo.ca/its/doc/hdi/web/treesed.html#replacing treesed How to Use Treesed First you log in to panther.uwo.ca, and go to the directory where you want to search or make changes. There are two choices you can make when using treesed: 1. Do I just want to search for a text, or do I want to search for a text and replace it with something else? If you are just searching you are using Treesed in \"search mode\", otherwise it is in \"replace mode.\" 2. Do I want to search/replace only in files in my current directory, or should files in all subdirectories (and all directories below that) also be done? Some examples will make this clear. Searching Say you are faced with the situation that the author of a slew of web-pages, Nathan Brazil, has left and has been succeeded by Mavra Chang. First, let us see which files are affected by this (what you type in is shown in bold): [10:52am panther] treesed \"Nathan Brazil\" -files *.html search_pattern: Nathan\\ Brazil replacement_pattern: ** Search mode . midnight.html: 1 lines on: 2 .. well.html: 1 lines on: 3 We notice the following: * The search text \"Nathan Brazil\" is enclosed in double-quotes (\"). * You specify which files to search with -files followed by a list of file names--in this case *.html. * Treesed reports the search pattern (\"pattern\" is just a fancy word for \"text\") you specified (you can ignore that \\). * Treesed reports an empty replacement_pattern. This is correct, because you haven't entered one. * It therefore deduces that is is in search mode. * It finds two files containing \"Nathan Brazil\", and reports on which lines of these files it found it; it does not show the lines themselves. Because you used -files, Treesed will search in the files you specify in the current directory. You can also search files in the current directory and all directories below it. However, in that case you can not specify which file names to use, all files will be searched: [11:02am panther] treesed \"Nathan Brazil\" -tree search_pattern: Nathan\\ Brazil replacement_pattern: ** Search mode . midnight.html: 1 lines on: 2 ... well.html: 1 lines on: 3 . new/echoes.html: 1 lines on: 2 We notice the following: * Instead of -files we now see -tree. * We do not see a specification of file names. * Treesed finds an occurence of \"Nathan Brazil\" in the file echoes.html in the subdirectory new; it did not find this file in the previous example (as it shouldn't). Replacing To replace a text you simply add the replacement text right after the search text: [11:17am panther] treesed \"Nathan Brazil\" \"Mavra Change\" -files *.html search_pattern: Nathan\\ Brazil replacement_pattern: Mavra Chang ** EDIT MODE! . midnight.html: 1 lines on: 2 Replaced Nathan\\ Brazil by Mavra Chang on 1 lines in midnight.html .. well.html: 1 lines on: 3 Replaced Nathan\\ Brazil by Mavra Chang on 1 lines in well.html We notice the following: * Right after the search text \"Nathan Brazil\" you specify the replacement text \"Mavra Chang\". * As a result, Treesed now reports a non-empty replacement_pattern. * Hence it concludes it is in \"edit mode\", which means replacment mode. * Treesed dutifully reports on which lines in which files it did the replacement. To replace a text in all files in the current directory and the ones below it, we do the following: [11:17am panther] treesed \"Nathan Brazil\" \"Mavra Chang\" -tree search_pattern: Nathan\\ Brazil replacement_pattern: Mavra Chang ** EDIT MODE! . midnight.html: 1 lines on: 2 Replaced Nathan\\ Brazil by Mavra Chang on 1 lines in midnight.html .... well.html: 1 lines on: 3 Replaced Nathan\\ Brazil by Mavra Chang on 1 lines in well.html . new/echoes.html: 1 lines on: 2 Replaced Nathan\\ Brazil by Mavra Chang on 1 lines in new/echoes.html and we get the expected results, including the replace in new/echoes.html. Old Versions Treesed leaves behind quite a mess of old versions of the files it changed (only in change-mode, of course). These old files have the same name as the original file, with .ddddd appended to it. For example, if treesed makes a change to midnight.html it will leave the original version as something like midnight.html.26299. You'll have to remove these files lest your disk area clutters up. Here is a command that does that, but beware! This command removes all files in the current directory and all below it, that end in a period followed by one or more digits: find . -name \"*.[0-9]*\" -exec rm {} \\; It is interesting to note that if you use treesed again without cleaning up, you may get files like midnight.html.26299.27654. These will also be cleaned up by the above slightly dangerous command. About Treesed treesed is public domain software developed and designed by Rick Jansen from Sara, Amsterdam, Netherlands, January 1996."}, {"response": 19, "author": "terry", "date": "Wed, Mar  8, 2006 (08:28)", "body": "download treesed http://fresh.t-systems-sfr.com/cgi-bin/warex?unix/src/misc/treesed.Z http://fresh.t-systems-sfr.com/cgi-bin/warex?unix/src/misc/treesed.gz http://fresh.t-systems-sfr.com/cgi-bin/warex?unix/src/misc/treesed.bz2 http://fresh.t-systems-sfr.com/cgi-bin/warex?unix/src/misc/treesed.zip http://fresh.t-systems-sfr.com/unix/src/misc/.warix/treesed.html"}, {"response": 20, "author": "terry", "date": "Wed, Mar  8, 2006 (09:14)", "body": "http://www.webmasterworld.com/forum46/495-1-10.htm has some options"}, {"response": 21, "author": "terry", "date": "Wed, Mar  8, 2006 (09:49)", "body": "http://www.laffeycomputer.com/rpl.html rpl - Replace Strings - from Laffey Computer Imaging Price: $0 (Copyrighted FreeWare) Current Version: 1.4.0 Date Modified: July 22, 2002 Featured as Tool of the Month on UnixReview! Overview rpl is a UN*X text replacement utility. It will replace strings with new strings in multiple text files. It can work recursively over directories and supports limiting the search to specific file suffixes. rpl [-iwRspfdtx [-q|-v]] Details rpl replaces old_str with new_str in all target files. It returns the number of strings replaced or a system error code (non-zero) if there is an error. Note that you should put strings in single quotes if they contain spaces. You must also escape all shell meta-characters. It's a good idea to put ALL strings in single quotes. If one of the strings starts with a \"-\" you need put \"--\" as the last argument BEFORE the string. This will prevent the options parser from treating the string as a command- line option. For Example: rpl -i -- '-8x' '+8x' myfile which would replace occurences of \"-8x\" with \"+8x\" in the file myfile (ignoring case). A period will be printed to stderr as each target file is processed to give you feedback on the replacement progress. You may use the quiet (-q) option to suppress all output but major error reporting. rpl will attempt to maintain the owner, group and permissions of your original files. For safety, rpl creates a temporary file and makes changes to that file. It then moves the temporary file over the original file. rpl sets the owner, group, and permissions of the new file to match those of the original file. In some circumstances rpl will not be able to do this (such as when a file is owned by the superuser but you have group write permission). In these cases rpl will warn you that the owner/group or permissions cannot be set and that file will be skipped, unless you use the force (-f) option. Note that the use of temp files in predictable, world-writeable locations could lead to symlink attacks. Ideally you should set the $TMPDIR environment variable to a private directory readable and writeable only by you. This is especially important if running rpl as root. You have been warned! rpl can be placed in silumation mode (-s), in which rpl will print a list of files that would be modified if an actual replace operation were executed. This is useful when you are about to make changes to a larger group of files, possibly in many directories. rpl can be placed into prompt mode (-p). In this mode rpl will examine each file, printing a period as each file is scanned. If a match is found rpl will prompt you to save the replacements made to that file. Answering \"y\", or pressing Return will save the changes. Answering \"n\" will leave that file untouched. rpl will then move on to the remaining target files. Note that you will only be prompted for files which had a match. If no match is found a period is printed to give you an indication that rpl is working. (This is useful when, for instance, you are performing a large recursive batch replacement on a collection of files.) Normally, rpl will change the modification time of all files it processes like any other program. However, you may instruct rpl to keep the original modification times using the -d (Don't alter mod-times) option. You can specify file suffixes to be searched using the -x option. Any files that do not match the specified suffixes will not be searched or modified. The -x option may be used more than once to tell rpl to search files with varying suffixes. For instance, say you wanted to search all of your \".html\", \".htm\", and \".php\" files you would add \" -x'.html' -x'.htm' -x'.php' \" to your command line. rpl would then skip any files that did not end with these suffixes. This is mainly useful when doing recursive searching (-R option). OPTIONS -i Ignore case of old_str rpl will match the old_str in the searched file regardless of the case. The case of new_str will not be altered. -w Whole words (old_str bounded by white space in file) rpl will only match old_str if it is bounded by the start of a line, a space, a tab, or the end of a line. -q Quiet mode (no output at all) Good for shell scripts, etc. -v Verbose mode (lots of output) rpl will list the name of each file and directory, and the line numbers that contain matches. -R Search directories recursively rpl will scan every file and every directory recur- sively. Without this option directories will be skipped. -x Specify file suffixes to search. (e.g. \".html\", \".c\", etc.) May be used multiple times. See above for details. -p Prompt for each file rpl will prompt you before scanning each file. If you respond 'N' or 'n' rpl will skip that file and move on to the next file. The default action if you press enter is to process the file. -s Simulation mode rpl will scan all of the files and list the names of files that it would modify if a replace opera- tion was executed. If you turn on the verbose (-v) option "}, {"response": 22, "author": "terry", "date": "Wed, Mar  8, 2006 (10:02)", "body": "Tool of the Month: rpl by Joe \"Zonker\" Brockmeier This month, I'll introduce a tool that is handy for admins, programmers, and anybody who works with text files on a regular basis. The utility is rpl, short for \"replace strings\", which is exactly what it does. rpl is a simple utility that searches files for a text string and replaces that text string with another that you specify. Replace Strings with rpl It's possible to replace text strings in multiple files with numerous *nix utilities, but that involves getting to know a programming language or the arcane syntax of sed, awk, or some other program. While I'm a big fan of sed and Perl, for example, there's also something to be said for a utility that allows users to become productive in a matter of minutes rather than learning a programming language. That's where rpl comes in. Is it as powerful as sed or Perl? Nope. But it is a quick and easy way to make changes in text files \ufffd and it shouldn't take more than a few minutes to learn. The basic syntax of rpl is rpl 'oldtext' 'newtext' filename. It doesn't really get much simpler than that, now does it? Note that strings should be placed inside single quotes (') so that the shell doesn't try to treat part of your text string as a special character. If you're replacing a single word with another single word (in other words, no white space) then it's not mandatory to place your strings inside single quotes \ufffd but it's a good habit to get into. There are also several options that may be of interest when using rpl. Let's say you want to replace all instances of the string \"Copyright 2003-2004\" with \"Copyright 2003-2005\" in all files with the extensions \".php,\" \".php3\" or \".html\" in the directory public_html and all of its subdirectories. It's a simple task using rpl: rpl -R -x .php -x .html -x .php3 'Copyright 2003-2004' 'Copyright 2003-2005' * The -R option tells rpl to look for the term recursively. Note that the -x option is used multiple times rather than using the option once and specifying several extensions afterwards. Even though you're passing the \"*\" wildcard to the shell, rpl will only work on files with one of the extensions specified. Very often, it's necessary to replace a string that may also be present within a larger string. For example, if you wanted to replace the word \"write\" in a set of files with another string, you might not want to insert the string into words like \"rewrite,\" \"written,\" and so forth. To tell rpl to ignore a string that is bounded by whitespace, use the -w option: rpl -w 'write' 'replace' * What if you're not sure which files contain a term? The -p option will cause rpl to prompt you for each file that will be changed. Note that rpl will not prompt for each change, but only for each file that will be changed. A file might have only one change, or several hundred. To find out ahead of time what files will be changed, use the -s \"simulation\" option. This will cause rpl to search out files that will be changed, and provide a list of files that will be changed, but no changes will be made on that pass. If you'd like to make changes to files without changing their modification time, use the -d option. Like most *nix utilities, rpl is case-sensitive by default. If you'd like to match instances of a string regardless of case, use the -i option. When using rpl -i, specifying \"abc\" as the old string will match \"abc,\" \"ABC,\" \"aBc\", and so forth. This can be particularly handy when replacing filenames in HTML files produced by users working on operating systems that are not case-sensitive. There are a few other useful options with rpl. Be sure to check the man page for rpl, and test it out a bit. It's not quite as powerful as using sed or Perl, but it's a nice tool when you're doing simple search and replace operations. Getting rpl The rpl utility is a freebie from Laffey Computer Imaging. Source and binaries are available from the site. For Debian users, rpl is just an apt-get away. from http://www.unixreview.com/documents/s=8989/ur0407h/"}, {"response": 23, "author": "terry", "date": "Wed, Mar  8, 2006 (10:35)", "body": "So far, good. rpl seems to be the answer I've been looking for to do global search and replace on unix files. I just replaced bank@spring.net with banking@wholetech.com for the address for donations to the Spring. The jury's still out. Let's see if this works. unix conference Main Menu"}]}, {"num": 21, "subject": "FreeBSD", "response_count": 6, "posts": [{"response": 1, "author": "terry", "date": "Tue, Jun 12, 2001 (11:07)", "body": "Here's how you create the install floppies from the 4.3 CD: For a normal CDROM or network installation, all you need to copy onto actual floppies from this directory are the kern.flp and mfsroot.flp images (for 1.44MB floppies). Get two blank, freshly formatted floppies and image copy kern.flp onto one and mfsroot.flp onto the other. These images are NOT DOS files! You cannot simply copy them to a DOS or UFS floppy as regular files, you need to \"image\" copy them to the floppy with fdimage.exe under DOS (see the tools/ directory on your CDROM or FreeBSD FTP mirror) or the `dd' command in UNIX. For example: To create the kern floppy image from DOS, you'd do something like this: C> fdimage kern.flp a: or d:\\tools>fdimage -v -f 1.44M d:\\floppies\\mfsroot.flp a: Assuming that you'd copied fdimage.exe and kern.flp into a directory somewhere. You would do the same for mfsroot.flp, of course. If you're creating the boot floppy from a UNIX machine, you may find that one of the following: dd if=floppies/kern.flp of=/dev/fd0 dd if=floppies/kern.flp of=/dev/rfd0 dd if=floppies/kern.flp of=/dev/floppy work well, depending on your hardware and operating system environment (different versions of UNIX have totally different names for the floppy drive - neat, huh? :-). Going to two installation boot floppies is a step we definitely would have rather avoided but we simply no longer could due to general code bloat and FreeBSD's many new device drivers in GENERIC. One positive side-effect of this new organizational scheme, however, is that it also allows one to easily make one's own kern or MFS floppies should a need to customize some aspect of the installation process or use a custom kernel for an otherwise unsupported piece of hardware arise. As long as the kernel is compiled with ``options MFS'' and ``options MFS_ROOT'', it will properly look for and boot an mfsroot.flp image in memory when run (see how the /boot/loader.rc file in kern.flp does its thing). The mfsroot.flp image is also just a gzip'd filesystem image which is used as root, something which can be made rather easily using vnconfig(8). If none of that makes any sense to you then don't worry about it - just use the kern.flp and mfsroot.flp images as described above. FDIMAGE - Write disk image to floppy disk Version 1.5 Copyright (c) 1996-7 Robert Nordier Usage: fdimage [-dqsv] [-f size] [-r count] file drive -d Debug mode -f size Specify the floppy disk format by capacity, eg: 160K, 180K, 320K, 360K, 720K, 1.2M, 1.44M, 2.88M -q Quick mode: don't format the disk -r count Retry count for format/write operations -s Single-sector I/O -v Verbose"}, {"response": 2, "author": "terry", "date": "Sun, Dec 23, 2001 (13:13)", "body": "quick and dirty way that works on the FreeBSD boxes i have access to : ascending : ls -alt | sort +4 descending: ls -alt | sort +4 -r"}, {"response": 3, "author": "terry", "date": "Tue, Sep 10, 2002 (21:42)", "body": "*** Install FreeBSD on Promise FastTrak *** 1. Dump the image file \"kern.flp\" to a floppy disk. Mark this disk as \"A\". e.g. dd if=kern.flp of=/dev/rfd0 2. Dump the image file \"mfsroot.flp\" to a floppy disk. Mark this disk as \"B\". e.g. dd if=mfsroot.flp of=/dev/rfd0 3. Insert disk \"A\" in to floppy drive, insert FreeBSD CD into CDROM drive. Boot machine. Please make sure that floppy disk boot first. 4. When system prompt \"Please insert MFS root floppy and press Enter\", insert disk \"B\" and press \"Enter\". 5. When the machine beep and prompt \"Boot [kernel] in X seconds...\", press any key (except \"Enter\"). 6. Type \"load ft.ko\" to load the module. Type \"boot\" to go on booting. 7. Start to install FreeBSD. The system may give a warning that the version of FreeBSD CD does not match the version of floppy and ask if you want to retry any way, doesn't matter, press \"Enter\" to go on installation. 8. When the installation finished, DO NOT reboot the machine. Press Alt-F4 switch to console mode. 9. Copy the file \"ft.ko\" from disk \"B\" to \"/modules\" (please mount first). Change directory to the directory which \"kp\" resides (on floppy B) and type \"./kp\" to patch the kernel files. 10. Unmount disk \"B\", insert disk \"A\", then mount it. Copy \"kernel.gz\" from disk \"A\" to \"/\". Type \"gunzip kernel.gz\" to unzip the kernel. Unmount disk \"A\". 11. Reboot system. 12. Recompile the FreeBSD kernel. 13. Reboot. DONE!"}, {"response": 4, "author": "terry", "date": "Fri, Sep 13, 2002 (12:33)", "body": "Sounds great, but it didn't work!"}, {"response": 5, "author": "terry", "date": "Fri, Nov  1, 2002 (20:21)", "body": "Adding a disk with FreeBSD: 12.3.2 Using Command Line Utilities 12.3.2.1 Using Slices This setup will allow your disk to work correctly with other operating systems that might be installed on your computer and will not confuse other operating systems' fdisk utilities. It is recommended to use this method for new disk installs. Only use dedicated mode if you have a good reason to do so! # dd if=/dev/zero of=/dev/rda1 bs=1k count=1 # fdisk -BI da1 #Initialize your new disk # disklabel -B -w -r da1s1 auto #Label it. # disklabel -e da1s1 # Edit the disklabel just created and add any partitions. # mkdir -p /1 # newfs /dev/da1s1e # Repeat this for every partition you created. # mount -t ufs /dev/da1s1e /1 # Mount the partition(s) # vi /etc/fstab # Add the appropriate entry/entries to your /etc/fstab. If you have an IDE disk, substitute ad for da. On pre-4.X systems use wd. 12.3.2.2 Dedicated Adding A Disk to FreeBSD This bit documents the process of adding a new disk to a FreeBSD system. Let's say, for example, that we have a server which has one system disk that contains some four partitions. However, the /usr directory is nearly full, so we have decided offload /usr/home onto a new disk. The current disk is on primary/master on the first IDE chain. It's device id is /dev/ad0. We are adding a drive to the slave possition of that same chain. It is /dev/ad1. Make sure you have the right device before you issue these commands. Don't want to be wrong! fdisk -BI ad1 disklabel -w -B ad1s1 auto disklabel -e ad1s1 newfs /dev/ad1s1c Now we mount the new disk under a temp directory to copy the data over. mount /dev/sd1s1c /mnt cp -Rpv /usr/home /mnt When it's done, check the data, make sure you got everything and that there were no errors. Then unmount the drive from it's temporary place (/mnt). And clear out /usr/home to get the space back on that device. umount /mnt rm -rf /usr/home/ mkdir /usr/home Now simply edit /etc/fstab and put/in or edit the entry for /usr/home. Then 'mount /usr/home' and you are done."}, {"response": 6, "author": "terry", "date": "Sun, Aug 24, 2003 (08:31)", "body": "David Chaplin-Loebell (dloebell) Your post inspired me to write out a list of my hard-won FreeBSD knowledge. I'm no expert, but I've had FreeBSD servers for almost three years now and I've learned a few things. Hopefully they're useful for others: FreeBSD (and the ports collection) use the /usr/local tree more consistently than other Unixes I've dealt with. For example: - Config files for locally-installed software live in subdirectories of /usr/local/etc/ -- there's one subdirectory for each package. - Similarly, startup files for locally-installed daemons live in /usr/local/etc/rc.d -- note that many ports will install a \".sample\" file in this directory; only files ending in \".sh\" are actually run at startup. - Docs for locally installed packages go in /usr/local/share/doc The nice thing about all this is you rarely have to mess with /etc, and that's good because /etc files are routinely replaced in system upgrades. (There's a tool called mergemaster that helps deal with this, but it's a pain to use, and it's better to simply minimize modifications of files in /etc). I know I mentioned cvsup and portupgrade earlier, but I'll reiterate: every FreeBSD system needs these two tools. There's a good article on portupgrade here: http://www.onlamp.com/pub/a/bsd/2001/11/29/Big_Scary_Daemons.html CVSUP is a bit harder to find a good explanation for. It's easy once you build a proper config file, but figuring out what to put in that config file the first time can be a bit confusing. I suggest: *default host=cvsup2.FreeBSD.org *default base=/usr *default prefix=/usr *default release=cvs *default tag=RELENG_4_8 *default delete use-rel-suffix src-all ports-all tag=. This says: get the latest sources in the 4.8-RELEASE tag, and the latest ports. I prefer to track 4.x-RELEASE on my machines (I'll move to 5.x-RELEASE when the FreeBSD team declares it \"production\" ready.) Some admins seem to prefer tracking FreeBSD 4-STABLE, but in my mind this changes too often to use on production servers. If you prefer to track -STABLE, simply replace RELENG_4_8 with RELENG_4. Subscribe to the FreeBSD security alerts mailing list at http://lists.freebsd.org/mailman/listinfo/freebsd-security-notifications so you know when you need to upgrade your system. If you read only one section of the FreeBSD manual, read about how to do system upgrades. Basically, you cvsup your sources, do a make buildworld, make buildkernel, make installkernel, reboot, make installworld, mergemaster. But of course there are details, and if you're running an internet server you should know how to do this. http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/makeworld.html FreeBSD has many useful options that can be controlled by editing /etc/rc.conf. In particular, this file is used for enabling and disabling built-in subsystems like Sendmail, sshd, nfs, etc. If you deal with perl, do yourself a favor and install the /usr/ports/lang/perl5, then type use.perl port. This means that you will use the version of perl installed from ports, not the version that is part of the FreeBSD system. The version installed with the system is 5.005_03, very outdated, and not easy to upgrade. Do this BEFORE you install any Perl modules from CPAN. When possible, install everything from ports. Don't install things by any other method unless the port doesn't work or is unavailable. (In particular, installing perl modules using the CPAN module seems to get me in trouble whenever I do it. I almost always discover that the module I needed was available in the FreeBSD ports collection after all). On a server, I generally want only the command-line version of a tool, not the X11 version. For example, if you go into the emacs port directory and type \"make install\", it will, by default, build X11 before it builds emacs, and build an emacs binary with lots of X11 stuff in it that you don't need. In most ports, you can disable this behavior by typing make install WITHOUT_X11=1 unix conference Main Menu"}]}, {"num": 22, "subject": "ls", "response_count": 4, "posts": [{"response": 1, "author": "terry", "date": "Thu, Apr 11, 2002 (14:51)", "body": "$ ls -lF | grep / drwxr-xr-x 2 nobody nobody 512 Jan 2 16:16 -ralt/ drwxr-xr-x 4 nobody nobody 512 Dec 20 23:59 Catalog/ drwxr-xr-x 3 nobody nobody 512 Mar 30 22:11 Catalog Templates/ drwxr-xr-x 3 nobody nobody 6656 Mar 26 18:51 DSC/ drwxr-xr-x 3 nobody nobody 512 Dec 20 23:58 _derived/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:01 _fpclass/ drwxr-xr-x 3 nobody nobody 512 Dec 20 23:58 _overlay/ drwx------ 10 nobody nobody 1024 Dec 21 00:00 _private/ drwxr-xr-x 4 nobody nobody 512 Apr 1 09:43 _themes/ drwxrwxr-x 4 nobody nobody 512 Mar 26 06:24 _vti_bin/ drwxrwxr-x 2 nobody nobody 13312 Apr 9 13:27 _vti_cnf/ drwxrwxr-x 2 nobody nobody 512 Dec 21 00:00 _vti_log/ drwxrwxr-x 3 nobody nobody 512 Apr 7 08:01 _vti_pvt/ drwxrwxr-x 3 nobody nobody 512 Dec 21 00:00 _vti_txt/ drwxr-xr-x 3 nobody nobody 1024 Dec 21 00:00 _webtrends/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 act/ drwxr-xr-x 5 nobody nobody 512 Mar 30 22:11 affiliates/ drwxr-xr-x 3 nobody nobody 512 Dec 25 12:06 afghan/ drwxr-xr-x 3 nobody nobody 512 Dec 20 23:58 analog/ drwxr-xr-x 5 nobody nobody 512 Dec 21 00:00 ann/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 annimages/ drwxr-xr-x 3 nobody nobody 512 Apr 5 20:21 antonio/ drwxr-xr-x 3 nobody nobody 8192 Dec 21 00:01 apixbc/ drwxr-xr-x 3 nobody nobody 1536 Apr 5 20:21 applewhite/ drwxr-xr-x 2 nobody nobody 512 Dec 3 06:27 applicatns/ drwxr-xr-x 3 nobody nobody 1536 Apr 5 21:59 apps/ drwxr-xr-x 2 nobody nobody 512 Feb 23 2000 archive/ drwxr-xr-x 3 nobody nobody 512 Apr 5 20:08 ark/ drwxr-xr-x 2 nobody nobody 512 Jan 15 2001 austen/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:03 austin/ drwxr-xr-x 3 nobody nobody 1024 Dec 21 00:02 barney/ drwxr-xr-x 3 nobody nobody 512 Mar 30 22:11 bastrop/ drwxr-xr-x 6 nobody nobody 512 Mar 30 21:57 bayless/ drwxr-xr-x 11 nobody nobody 1024 Dec 20 23:59 bayou/ drwxr-xr-x 2 nobody nobody 512 Jan 2 13:50 bcpix/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:03 beatles/ drwxr-xr-x 3 nobody nobody 512 Mar 30 21:57 birthdays/ drwxr-xr-x 2 nobody nobody 512 Apr 29 2001 blah/ drwxr-xr-x 3 nobody nobody 512 Apr 5 21:17 bodychoir/ drwxr-xr-x 3 nobody nobody 512 Jan 14 10:58 boomquest/ drwxr-xr-x 3 nobody nobody 512 Apr 7 07:37 bratwood/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 bsdi/ drwxrwxr-x 5 nobody nobody 512 Apr 7 07:40 budapest/ drwxr-xr-x 3 nobody nobody 512 Apr 5 20:21 canada/ drwxr-xr-x 5 nobody nobody 512 Dec 21 00:02 capzeyez/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 cedar/ drwxr-xr-x 9 nobody nobody 1024 Dec 21 00:00 cfadm/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:03 cgi-local/ drwxr-xr-x 4 nobody nobody 1024 Apr 5 20:08 chautauqua/ drwxr-xr-x 3 nobody nobody 512 Apr 8 14:27 connie/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:01 contact/ drwxr-xr-x 4 nobody nobody 512 Apr 9 16:11 cottage/ drwxr-xr-x 2 nobody nobody 512 Jan 21 2000 cultures/ drwxr-xr-x 2 nobody nobody 512 Jan 21 2000 diana/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 diary/ drwxr-xr-x 3 nobody nobody 512 Mar 30 21:17 dns/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:00 docs/ drwxr-xr-x 5 nobody nobody 512 Apr 5 20:08 domains/ drwxr-xr-x 5 nobody nobody 512 Dec 21 00:02 donn/ drwxr-xr-x 3 nobody nobody 512 Mar 26 18:50 dotepp/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 drsingha/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:00 dvd/ drwxr-xr-x 3 nobody nobody 512 Apr 5 20:51 eff/ drwxr-xr-x 3 nobody nobody 512 Apr 5 20:51 ellis/ drwxr-xr-x 2 nobody nobody 512 Aug 18 2001 email/ drwxr-xr-x 3 nobody nobody 1024 Mar 27 16:16 ethernet/ drwxr-xr-x 11 nobody nobody 512 Jan 4 11:06 family/ drwxr-xr-x 3 nobody karenr 512 Apr 9 08:35 fanfic/ drwxr-xr-x 5 nobody nobody 512 Apr 5 20:21 farm/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 favorites/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 fitness/ drwxr-xr-x 2 nobody nobody 512 Jan 21 2000 food/ drwxr-xr-x 3 nobody nobody 512 Mar 30 21:57 french/ drwxr-xr-x 6 geo geo 512 Apr 7 11:39 geo/ drwxr-xr-x 3 nobody nobody 512 Mar 27 16:16 google/ drwxr-xr-x 3 nobody nobody 512 Mar 30 22:11 halloween/ drwxr-xr-x 2 nobody nobody 512 Oct 20 12:37 ham/ drwxr-xr-x 3 nobody nobody 512 Apr 5 20:08 hawaii/ drwxr-xr-x 6 nobody nobody 512 Apr 3 06:50 help/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:02 hirshfield/ drwxr-xr-x 3 nobody nobody 512 Mar 31 16:07 home/ drwxr-xr-x 5 nobody nobody 1024 Dec 21 00:00 house/ drwxr-xr-x 3 nobody nobody 512 Apr 5 20:08 icq/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:00 images/ drwxr-xr-x 3 nobody nobody 1024 Dec 21 00:02 indexpages/ drwxr-xr-x 4 nobody nobody 512 Dec 21 00:02 jasa/ drwxr-xr-x 3 nobody nobody 512 Mar 30 22:12 john/ drwxr-xr-x 3 nobody nobody 512 Apr 5 20:08 justin/ drwxr-xr-x 19 nobody karenr 512 Apr 9 08:37 karenr/ drwxr-xr-x 3 nobody nobody 512 Mar 26 18:51 keen/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:03 kristen/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:00 lease/ drwxr-xr-x 3 nobody nobody 512 Dec 21 00:03 logos/ drwxr-xr-x 2 nobody nobody 512 Jun 9 2001 logs/ drwxr-xr-x 3 nobody nobody 512 Jan 2 23:17 magicbox/ drwxr-xr-x 3 nobody geo 512 Dec "}, {"response": 2, "author": "terry", "date": "Wed, Sep 25, 2002 (11:13)", "body": "The equivalent of \"dir\" in DOS, just better. ls -ltr will list files with long description, according to timestamp , recursive . In other words list files, newest file last. If files scroll out of buffer, do ls -ltr |more. You can list by filetype, name etc. using wildcards. ls *.html show only files with extension .html If you want to list only directories, not files, do for instance: ls -latr |grep ^d See \"grep\" below. This search term filter out only lines starting with a \"d\" When you list the long format, that can only be directories: drwxrwxr-x 6 dark dark 4096 Apr 29 23:49 .netscape ls -la will show long description of all files, also those starting with a dot. Apart from the directory pointers . and .. there's the rc-files and other configuration files comparable to a kind of ini-files, reflecting user settings for various sessions and applications."}, {"response": 3, "author": "spring", "date": "Thu, Jul 24, 2003 (12:24)", "body": "Could anyone kindly tell me how to let \"ls -l\" output the file details sorted by file size? Or any shell tools or scripts are also welcome. If you have the GNU vesion of ls, use the -S option. Otherwise, pipe it through sort: ls -l | sort -n +4 Sort by the 5th field, numeric, descending: ls -l |sort -k5rn,5"}, {"response": 4, "author": "spring", "date": "Thu, Jul 24, 2003 (12:30)", "body": "ls -l | sort -k 5n or better still, if u have gnu ls: ls -Slr unix conference Main Menu"}]}, {"num": 23, "subject": "cactus - Central Texas Unix Society", "response_count": 4, "posts": [{"response": 1, "author": "terry", "date": "Sun, Sep  8, 2002 (07:51)", "body": "Message-ID: From: Fiber McGee Reply-To: fmcgee@spamcop.net Organization: CMyQOP User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0.0+) Gecko/20020515 X-Accept-Language: en-us, en MIME-Version: 1.0 Newsgroups: austin.general,austin.internet Subject: CACTUS newsletter for May 2002 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Lines: 590 NNTP-Posting-Date: Wed, 15 May 2002 16:23:38 CDT X-Trace: sv3-h46PimIzikHYw7klBHoqmCEjfB9vgccO4uykuAB+bxgfdnKEs8gsh/lhsZhf8C8biShz0R+s1q3/don!syNQxp6uhUD8Yus6JMWvXl/bKYjci0ZsWbLBptaEGmk7NA== X-Complaints-To: abuse@GigaNews.Com X-DMCA-Notifications: http://www.giganews.com/info/dmca.html X-Abuse-Info: Please be sure to forward a copy of ALL headers X-Abuse-Info: Otherwise we will be unable to process your complaint properly Date: Wed, 15 May 2002 21:23:38 GMT Capital Area Central Texas UNIX Society CACTUS Newsletter Volume 18, Number 5 - May 2002 Contents: * [1]May meeting: Argus Systems PitBull LX * [2]April Meeting Report * [3]Our Newsletter E-Mail List * [4]CACTUS System News * [5]Review: Knoppix Run from CD Linux * [6]Building Mozilla from source on FreeBSD * [7]Membership Report * [8]CACTUS Officers and Contacts * [9]CACTUS Sponsors * [10]Meeting Location and Map May Meeting Program Larry Thompson and members of RFD Associates will describe the the Argus-Systems PitBull LX product which provides e-Commerce environments with the most formidable level of protection from the inside out. [11]Return to top April Meeting Report by Lindsay Haisley The meeting was reasonably well attended by current CACTUS meeting standards. All officers were present except for Newsletter editor Bob Izenberg. Bob had expressed some concern regarding the equity for our deal with Tomorrow's Technologies under which we're trading use of our portable class C network for a 1U rack space with Tomorrow's Tech. I expressed Bob's concern to the meeting, and suggested that while the deal was in Tomorrow's Tech's favor as far as the value of these resources is concerned, Mike Erwin and his comrades have been solid supporters of CACTUS, hosting linux.cactus.org faithfully for several years without complaint while we juggled our Sparc 10 between reluctant sponsors. A motion was made, seconded and passed to stand by our previous decision on the exchange. Membership chair Luis Basto gave a very interesting presentation and demonstration of Knoppix - a bootable, runnable Linux on a CD-ROM. Knoppix Linux boots from CD-ROM, loads the kernel and sets up part of its filesystem on a RAM disk. It will find and use swap space on a hard drive if it's available, or grap swap space on a DOS hard drive partition. It runs X (KDE desktop) if memory is available, or simple command line interface if not. It will run in CLI mode with as little as 16M of memory present. Everyone at the meeting was fairly well impressed. Knoppix is free, covered under the GNU public license, and is available from [12] http://www.knopper.net/knoppix . The seat of Knoppix development is in Germany and many of the online information is in German, however English translations of some pages are posted as well, enough to make it worthwhile for English speaking Linux enthusiasts to pay the site a visit. The main presentation of the evening was given by Jennifer Green from Veritas, who gave us a grand tour of the Veritas Foundation Suite consisting of the Veritas Volume Manager and filesystem. Veritas is an Enterprise level storage management solution with many capabilities. She presented each element of of the Veritas suite with ample graphics and capably answered many questions from members present. Those interested in pursuing a further study of Veritas can learn more at the Veritas website at [13] http://www.veritas.com . [14]Return to top Knoppix - a complete runnable Linux on CD-ROM by Luis Basto Achtung! Knoppix is a full blown Gnu/Linux distribution runnable from a CD without need for installation. As such, it is ideal as a Linux learning platform, a rescue system, a security scanner, or for doing presentations and demos. It is developed by Klaus Knopper, http://www.knopper.net . Much of the stuff is in German but there are many links in English. Since the whole package is open source under Gnu GPL, the sources can be found at http://www.knopper.net/knoppix/sources . One of its most powerful features is the automatic recognition of many types of graphic cards, sound cards, SCSI and other peripheral devices. It uses transparent decompression to pack lots of software onto a single CD. For example, the Version 2.1 which I demo'ed at the April CACTUS meeting contained over 1.7 GB of software compressed onto a 700 MB CD. True to most Linux distros, Knoppix is quite frugal in system requirements. It works with a 486 PC or better, can run with only 16 MB in text mode and 82 MB or more for X (KDE or Gnome), with 128 MB recommended. Since it runs from CD, it needs to have a bootable CD-ROM or DVD."}, {"response": 2, "author": "terry", "date": "Sun, Dec  8, 2002 (17:00)", "body": "* 2002 Meeting Schedule Feb 21 Main Program: Mike Erwin (formerly of OuterNet Connection Strategies, an Austin-area ISP) speaks about computer forensics and security. Chip Rosenthal, of Unicom Systems Development. Recipient of Austin Chronicle's 1997 Tech Award for \"Best Usenet Watchdog & Helpful Guy\" for his fight against SPAM, will discuss his battle with a California company that is attempting to commandeer his primary domain that he has been using since 1990. March 21 Chad Kissinger of OnRamp (onr.com) will speak about HR 1542, the Tauzin/Dingell bill. April 18 Tim Trader, of VERITAS Software, will be presenting VERITAS Volume Manager and FIle System. These products work across multiple UNIX platforms and provide a great deal of functionality. The soon to be released version will now run on Linux and AIX. May 16 Larry Thompson and members of RFD Associates will describe the the Argus-Systems PitBull LX product which provides e-Commerce environments with the most formidable level of protection, from the inside out. June 20 Ralph Kirkley, head hunter extrodinaire with decades of experience in the Austin area July 11 Eric Raymond: best-selling author and noted open-source activist. August 15 Lindsay Haisley presents a tutorial on Courier (\"Qmail on steriods\"). A mail tranfer agent. Sept 19 Steve Dobbelstein & Kevin Corry: Enterprise Volume Management System (An IBM Open Source Linux project). EVMS includes a kernel space runtime and a modularized engine that provides APIs for the creation, configuration, management, and deletion of volumes, volume groups, partitions and disks. October 17 Kevin P. Dankwardt: Embedded Linux. The speaker is founder and President of K Computing, a Silicon Valley training and consulting firm. He has spent most of the last 9 years designing, developing, and delivering technical training for such subjects as Unix system programming, Linux device drivers, real-time programming, and parallel-programming for various organizations world-wide. He received his Ph.D. in Computer Science, in 1988."}, {"response": 3, "author": "terry", "date": "Sun, Dec  8, 2002 (17:05)", "body": "The above posted for historical reasons. I'm glad Chip kept his name. I wasn't so lucky when Realtime's Bob Gustwick and George Wenzel stole the name austin.com from me with email forgery. It was back in the days when there weren't adequate safeguards to protect domain name owners. And since they had root power over my server and could easily spoof my email address, it was easy for them to send a domain name transfer to Network Solutions transferring the name austin.com from spring.net and me to realtime.net and them. Gustwick and Wenzel are thieves and cheats. And liars. Because they deny it to this day. Gustwick and Wenzel are some of the worst scumbags in the Internet business because they stole my livlihood from me. At least Skip Rosenthal kept his domain name. I wasn't so lucky."}, {"response": 4, "author": "terry", "date": "Sun, Dec  8, 2002 (17:16)", "body": "There is a historical record of my ownership of austin.com in the O'Reilly book on domain names which I have a copy of in my library. They actually had a book back then which listed domain names! I'll scan it and post a copy on my website as evidence. unix conference Main Menu"}]}, {"num": 24, "subject": "fixing permissions in drool and geo", "response_count": 1, "posts": [{"response": 1, "author": "terry", "date": "Sat, Sep 28, 2002 (08:11)", "body": "su-2.05# chown -R geo:geo geo su-2.05# cd geo su-2.05# ls -l | grep ^d drwxr-xr-x 3 geo geo 512 Apr 5 21:25 Conference drwxr-xr-x 3 geo geo 512 Mar 23 2002 Iki drwxr-xr-x 3 geo geo 512 Dec 21 2001 JohnVolos drwxr-xr-x 2 geo geo 512 Apr 7 11:39 _vti_cnf su-2.05# cd John* su-2.05# ls -l total 1 drwxr-xr-x 19 geo geo 512 Apr 20 14:01 Public su-2.05# cd Public su-2.05# ls -l total 17 drwxr-xr-x 3 geo geo 512 Jun 24 03:03 Astronomy drwxr-xr-x 3 geo geo 512 Apr 8 14:12 Eq_gifs drwxr-xr-x 3 geo geo 512 Apr 8 14:12 GEOLINKS drwxr-xr-x 3 geo geo 512 May 17 10:53 Geology drwxr-xr-x 2 geo geo 512 Jun 30 02:05 JULIE drwxr-xr-x 7 geo geo 512 May 2 07:39 Portal drwxr-xr-x 3 geo geo 512 Apr 8 14:12 SANFRAN drwxr-xr-x 3 geo geo 1024 Aug 18 22:26 Seismology drwxr-xr-x 3 geo geo 512 Apr 8 14:12 SoundEffects drwxr-xr-x 3 geo geo 512 Apr 8 14:12 buttons1 drwxr-xr-x 3 geo geo 512 Apr 8 14:12 magnetosphere drwxr-xr-x 3 geo geo 512 Jun 29 00:56 marcia drwxr-xr-x 3 geo geo 512 Jun 27 22:40 music drwxr-xr-x 3 geo geo 512 Apr 8 14:12 newgeo drwxr-xr-x 5 geo geo 512 Apr 18 00:46 pages drwxr-xr-x 3 geo geo 1024 Jun 26 14:15 various drwxr-xr-x 3 geo geo 512 Apr 8 14:12 xmas su-2.05# . unix conference Main Menu"}]}, {"num": 25, "subject": "chmod", "response_count": 5, "posts": [{"response": 1, "author": "terry", "date": "Thu, Oct 10, 2002 (05:31)", "body": "10/30/1999 Backend A quick and dirty chmod Tutorial...... Print Article By Anthony Baratta (AnthonyB) 'chmod' or \"change mode\" is the *NIX way of changing file permissions. It is VERY different from DOS/Windows, if you are new to *NIX or always wondered what \"drwxr-xr-x\" meant read on..... Where Windows/DOS machines realistically have one set of file permissions: Read/Write - Archive - System - Hidden and then add on User Permissions to the files and directories; *NIX breaks the permissions into three groups, 1 - user, 2 - group, 3 - world. When you do an ls -la you might see the following: [user@linux sites]$ ls -la drwxr-xr-x 16 root root 1024 Oct 20 19:56 . drwxr-xr-x 9 root root 1024 Sep 5 22:56 .. drwxr-xr-x 9 foo user 1024 Sep 5 22:56 dir1 drwxr-xr-x 9 foo user 1024 Sep 5 22:56 dir2 drwxr-xr-x 9 foo user 1024 Sep 5 22:56 dir3 -rw-r--r-- 9 foo user 1024 Sep 5 22:56 file1 -rw-r--r-- 9 foo user 1024 Sep 5 22:56 file2 -rw-r--r-- 9 foo user 1024 Sep 5 22:56 file3 All the gobblygook at the beginning of each line is the file permissions. Note: To *NIX, directories are just special files. In order to allow someone to 'traverse' the directory tree, the user must have eXecute permissions on the directory even if they have read/write privileges. Within each set of permissions (you, group, world) there are three permissions you can set: Read - Write - Execute. Therefore when you set the permissions on a file you must take into account 'who' needs access. Here's a stripped down list of the options chmod takes: (for more info do a man chmod at the command line.) chmod [-R] ### -R is optional and when used with directories will traverse all the sub-directories of the target directory changing ALL the permissions to ###. Very useful but use with extreme caution. The #'s can be: 0 = Nothing 1 = Execute 2 = Write 3 = Execute & Write (2 + 1) 4 = Read 5 = Execute & Read (4 + 1) 6 = Read & Write (4 + 2) 7 = Execute & Read & Write (4 + 2 + 1) Of course you need a file name or target directory. Wild cards * and ? are acceptable. If you don't supply the -R, with the target directory, the directory itself will be changed, not anything within it. Again you must supply the #'s in a set of three numbers (you, group, world). To make a file readable and writable by you, and only read for your group, and no access from the world,it would look like: chmod 640 filename The result would look like... -rw-r----- 9 foo user 1024 Sep 5 22:56 file3 To make all files that end in .cgi read-write-executable for you, and read-executable for everyone else: chmod 755 *.cgi The result would look like... -rwxr-xr-x 9 foo user 1024 Sep 5 22:56 file3.cgi -rwxr-xr-x 9 foo user 1024 Sep 5 22:56 file4.cgi Here are some standard permissions for files and directories: [This is a gross approximation, a place to start. Your sysadmin maybe really loose with permissions or a really tight-butt. Your mileage *will* vary.] For Apache running as nobody:nobody.....Most Perl Scripts should be set to 755. Most HTML files should be set to 644. And most data files that must be written to by a web server should be 666. The standard directory permission should be 755. Directories that must be written to by a web server should be 777. If the web server is running within the same group as you....Most Perl Scripts should be set to 750. Most HTML files should be set to 640. And most data files that must be written to by a web server should be 660. The standard directory permissions should be 750. Directories that must be written to by a web server should be 770. Your home directory should be 700. If you are operating a ~username type server, the public_html directory should be 777. (You may also need to open up the home directory to 755.) Side Note: any file name that starts with a '.' is invisible to the webserver when a directory list is generated. This is a quick and dirty way to hide a file."}, {"response": 2, "author": "terry", "date": "Thu, Oct 10, 2002 (05:35)", "body": "Recursively Change File Permissions(#12) You can recursively change file permissions using the find and chmod commands. For example, to change the file permissions for all files in the private directory and all of its subdirectories so that no one but you has access use the following commands. $ CD ~/private $ find . -name '*' -exec chmod go-a \\{\\} \\; To change the file permissions starting from your home directory so that others have no access use the following command.{\\} \\; $ Find ~ -name '*' -exec chmod o-a Be careful if you have a web page. If others have no access to the web page files then they can't load your pages in their browser. You can use more advanced features of the find command to search for files and change permissions. For example, search for any files that have write access for the group or others and remove them. $ Find ~ -perm -002 -exec chmod o-w \\{\\} \\; $ find ~ -perm -020 -exec chmod g-w \\{\\} \\;"}, {"response": 3, "author": "terry", "date": "Thu, Oct 10, 2002 (06:18)", "body": "there's a nice way of doing it (which escapes me at present), which I'm sure someone will point out as I write this but.. for i in `find /web directory/ -print` do if [ -d $i ] ; then chmod 755 $i else chmod 644 $i fi done should do it.. Replace the chmod commands with \"echo\" commands to test it works as you want it to first.. Donncha. adam beecher wrote: > > Say I have a web directory, and I want to recursively chmod all the > directories 755 and all the files 644, how would I go about that then then? > http://www.linux.ie/pipermail/cork/2001-March/001799.html"}, {"response": 4, "author": "terry", "date": "Thu, Oct 10, 2002 (06:33)", "body": "ab> Say I have a web directory, and I want to recursively chmod ab> all the directories 755 and all the files 644, how would I go ab> about that then then? find . -xtype d -exec chmod 755 {} \\; find . -xtype f -exec chmod 644 {} \\; I've used -xtype so that symbolic links won't be followed. -- \"Pity has no place at my table.\" -- Dr Hannibal Lecter"}, {"response": 5, "author": "terry", "date": "Thu, Oct 10, 2002 (06:38)", "body": "Actually it should be find . -type d -exec chmod 755 {} \\; find . -type f -exec chmod 644 {} \\; unix conference Main Menu"}]}, {"num": 26, "subject": "Dual Boot Windows and UNIX (ie. FreeBSD)", "response_count": 2, "posts": [{"response": 1, "author": "terry", "date": "Sat, Nov  2, 2002 (06:54)", "body": "This is kinda ancient, but I read about one guys woes with this issue: At any rate, this more or less brings us to the present day, and to this brand-new TP560X sitting on my desk. The TP560X comes standard with a 12.1\" TFT active-matrix screen, 32MB RAM and a 4GB hard drive. I upgraded the memory to 96MB total and replaced the hard drive with IBM's 6.4GB unit. Thus begins my tale of woe. The intent was to partition the HD with 2GB for Win95 (to accommodate all the bloatware that comes with using Microsoft Office, plus an assortment of grant proposals and other such Windows-centric stuff) and partition the remaining 4GB for FreeBSD. New ThinkPads come equipped with a great resource: The whole system can be restored from a CDrom and rescue floppy that comes with the unit. So, confident beyond a shadow of a doubt that this utility would work, I installed the new hard drive, formatted it with DOS FDISK and FORMAT, and prepared myself to re-install windows. Moral 1: Never...NEVER give up your old MS-DOS floppies. Moral 2: Never...NEVER trust any Windows rescue floppy to work. I have a Sony PCMCIA cdrom that comes with an Adaptec SlimSCSI card. The documentation with the cdrom swears it supports every CDrom known to man -- IDE, SCSI, PCMCIA or otherwise. Except apparently the SlimSCSI. I couldn't get that part to work at all. I wasn't surprised, though, so I had a backup plan: use the Win95 Floppy Disk install set (28 floppies!) made from the *previous* TP560C's, two years ago. My next error was to install FreeBSD before installing Win95. The lore on the internet (and in the handbook, and the FAQ) is that to dual-boot Operating systems both OS's must have their root partition located completely within the first 1024 cylinders. This was a problem, as FreeBSD was probing the drive and coming up with a number around 13,000 cylinders. No matter what size I made the root partitions nor how I broke things up, I could not dual-boot. Only the first OS on the drive was found. The problem of course is that the BIOS in the laptop wanted to change the disk geometry. I got around this problem (after several days of punting around) by formatting the DOS partition using the DOS FORMAT program. This is mentioned in fine print in the FAQ, but might deserve more neon lights around it, at least for my sake. Once I did this, the whole 6.4GB drive suddenly appeared to have approximately 950 cylinders. My root partitions could be as big as I wanted, so I made Win95 take up the first 2GB just as I originally planned. Now the operation began to move much more quickly. I installed FreeBSD from the 2.2.7-RELEASE cdrom, then added a whole mess of ports and made some of my personal customizations I've learned with time. Then I installed Windows from the 28 floppy-disk set over an afternoon, reinstalled \"Booteasy\" from the tools directory and I had a basic dual-booting laptop."}, {"response": 2, "author": "terry", "date": "Sat, Nov  2, 2002 (06:54)", "body": "The above from http://www.daemonnews.org/199810/mobile.html unix conference Main Menu"}]}, {"num": 27, "subject": "Gnome Linux FreeBSD answer to Windows", "response_count": 0, "posts": []}, {"num": 28, "subject": "Help fo authors and administrators at Spring Websites", "response_count": 7, "posts": [{"response": 1, "author": "terry", "date": "Sun, Jan  5, 2003 (05:41)", "body": "I'll lead off with an email I got from WX5U - Mickey McInnis - this morning. OK, I think I understand the FTP stuff now. You're using ProFTPD. The config file is /usr/local/etc/proftpd.conf. You have directives in there of DefaultRoot ~/../ siteadmin DefaultRoot ~ !wheel ## May need a change. See below. This uses \"chroot\" to \"jail\" all FTP users not in the \"wheel\" group or in \"siteadmin\" in their own home directory and not let them cd to any higher directories. I think members of the \"siteadmin\" group will also end up in their respective www.SITENAME.TLD directory (Actually the parent of their home directory.) If I understand the structure correctly, wx5u needs to be in the \"siteadmin\" group. That way, ftp will chroot me into /usr/home/sites/www.tcares.org. Then I can go under there via ftp, but can't access any other web site files via ftp. I read something that makes me think this configuration may do some wierd things if a user is in siteadmin, but not in wheel. See http://proftpd.linux.co.uk/docs/faq/faq_full.html It says \" If two DefaultRoot directives apply to the same user, ProFTPD arbitrarily chooses one (based on how the configuration file was parsed).\" If I understand correctly, this means that a user in \"siteadmin\", but not in \"wheel\" may end up in ~/../ or in ~ with the configuration you have currently. If I understand correctly, this needs to be DefaultRoot ~/../ siteadmin DefaultRoot ~ !wheel,!siteadmin This way siteadmin FTP users end up reliably in the parent of their home directory. Wheel ends up in root, and anyone else ends up in their home directory. So, I think if you add wx5u to siteadmin group, and change the /usr/local/etc/proftpd.conf file, no files need to be moved. You still need to chmod -R wx5u /usr/home/sites/www.tcares.org/web chgrp -R site35 /usr/home/sites/www.tcares.org/web You know, this is actually sort of fun. Thanks again, 73 de WX5U ."}, {"response": 2, "author": "terry", "date": "Sun, Jan  5, 2003 (06:21)", "body": "Here's and example site I just created for wolf www# pwd /usr/home/sites/www.midnightwolf.com www# ls Merchant2 email web wolf cgi-bin logs web2 www# ls -l total 14 drwxr-xr-x 3 wolf site24 512 Dec 30 09:47 Merchant2 drwxr-xr-x 2 nobody nogroup 512 Jan 5 06:15 cgi-bin drwxr-xr-x 3 vpopmail site24 512 Dec 30 09:47 email drwxr-xr-x 2 root nogroup 512 Jan 2 13:49 logs drwxr-xr-x 9 wolf site24 512 Jan 5 06:16 web drwxr-xr-x 10 root site24 512 Dec 31 07:07 web2 drwxr-xr-x 4 wolf site24 512 Jan 5 06:18 wolf www# cd web www# ls -l total 24 -rw-r--r-- 1 wolf site24 383 Jan 5 06:16 .htaccess drwx------ 2 wolf site24 512 Jan 5 06:16 _private drwxr-xr-x 4 wolf site24 512 Jan 5 06:16 _vti_bin drwxr-xr-x 2 wolf site24 512 Jan 5 06:16 _vti_cnf -rw-r--r-- 1 wolf site24 1754 Jan 5 06:16 _vti_inf.html drwxr-xr-x 2 wolf site24 512 Jan 5 06:16 _vti_log drwxr-xr-x 2 wolf site24 512 Jan 5 06:16 _vti_pvt drwxr-xr-x 2 wolf site24 512 Jan 5 06:16 _vti_txt drwxr-xr-x 2 wolf site24 512 Dec 30 09:47 images -rw-r--r-- 1 wolf site24 72 Dec 31 07:23 index.html -rw-r--r-- 1 wolf site24 2453 Jan 5 06:16 postinfo.html www# If Ias in the wolf directory I would have to cd ../web to get to the website general files."}, {"response": 3, "author": "Moon", "date": "Sun, Jan  5, 2003 (13:25)", "body": "What is the IP (numerical) address for the message board at the DWG? This problem is three days old and I am not the only one who has it. Can someone help?"}, {"response": 4, "author": "terry", "date": "Sun, Jan  5, 2003 (13:32)", "body": "Ann Haker is the one to ask. The ip address returned from a ping is 63.119.175.10."}, {"response": 5, "author": "Moon", "date": "Sun, Jan  5, 2003 (14:51)", "body": "I did ask Ann and this is her response: I don't know it. The only IP number I have leads to Spring.net, not Austen.com."}, {"response": 6, "author": "KarenR", "date": "Sun, Jan  5, 2003 (14:59)", "body": "Evidently, Terry has set up this new server has having \"virtual\" domains, which is why the numeric address plus the conference stuff didn't work. I don't particularly like that setup and don't see any advantages to anyone, whether at spring.net., austen.com or firth.com. It's nice to have unique IP addresses, in case the DNSes go haywire, as backup. This way, everyone is stuck at spring.net's main page."}, {"response": 7, "author": "terry", "date": "Wed, Jan  8, 2003 (17:47)", "body": "We had virtual domains on the old server also. The only exception to this was the the austen.com which had a unique ip address. We only get 5 ip addresses and two of them have to be used for our name servers. One if for all our virtual domains (64.106.200.50). That only leaves two spare ip addresses which I need to use for another name server for backup purposes. We woul d have to pay a lot more for a large block of ip addresses unix conference Main Menu"}]}, {"num": 29, "subject": "crontab", "response_count": 1, "posts": [{"response": 1, "author": "terry", "date": "Tue, Apr  8, 2003 (10:05)", "body": "crontab The most important thing about the crontab manpage is the order in which the timing controls occur. There are six columns at the start of every command in the crontab. These are the time controls. It is normal to see asterisks in most of the time controls; an asterisk means \"every\" or \"this column doesn't matter\". But that doesn't make a lot of sense until you grok the time controls: 1. Minute (0-59) 2. Hour (0-23) 3. Day of the month (1-31) 4. Month of the year (1-12) 5. Day of the week (0-6 with 0=Sunday) So a command starting with \"5 * * * *\" means \"run this commmand at 5 minutes after the hour every hour, every day, every month, every week.\" A command starting with \"30 5 * * *\" means \"run this command at 5:30 am every day every month every week\". A command starting with \"*/5 * * * *\" means \"run this command every 5 minutes every hour, every day, every month, every week.\" (But some versions of cron will not allow this syntax.) unix conference Main Menu"}]}, {"num": 3, "subject": "BSDI Unix", "response_count": 15, "posts": [{"response": 1, "author": "terry", "date": "Tue, Jun  3, 1997 (10:32)", "body": "respond 3 Ted, we're continuing our discussion we started in linux here Look what happened when I tried your commands. And look at how much space there is now and compar it to the space that there was on here a couple of days ago which I documented in the linux topic. I probably outht to move that linux top cstuff here in the next post. barton:~ su Password: barton# rm -f /home/var rm: /home/var: is a directory barton# rmdir -f /home/var rmdir: illegal option -- f usage: rmdir directory ... barton# rmdir /home/var barton# mv /var /home/ ln -s /home/var /var mv: /var/run/printer: Operation not supported mv: /var: Device busy barton# barton# barton# rm -f /home/var rm: /home/var: is a directory barton# cd /home barton# ls ab5ks charles home1 mirna submit alice child internic moira tcarlin allan cidneye janc mouse tedchong ally crosby jdaniel nike teklay amy davros ka6atn paul terry awork deanna kaffeine pcmattic tmp baygolf des ldarj pelles tvpc beverly dutchman main richard var bhg geoff manual rus wave bob gfriz matt scotth wes bubbi golftravel max scottk www candace great mhessel spif zen cchang greg michaelt stacey barton# df Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/sd0a 9727 5459 3781 59% / /dev/sd0f 705727 134991 535449 20% /home /dev/sd0h 198335 177773 10645 94% /usr /dev/sd0g 63535 49486 10872 82% /var barton# Now 82% is ok,, but threre's just so much more available on home."}, {"response": 2, "author": "terry", "date": "Tue, Jun  3, 1997 (11:11)", "body": "Topic 2 of 11: 'Linux' Response 8 of 15: Paul Terry Walhus (terry) Sat, May 31, 1997 (11:48) 7 lines barton:~ df we're real short on hard disk space right Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/sd0a 9727 5459 3781 59% / /dev/sd0f 705727 78737 591703 12% /home /dev/sd0h 198335 177773 10645 94% /usr /dev/sd0g 63535 57297 3061 95% /var barton:~ Topic 2 of 11: 'Linux' Response 9 of 15: Ted Chong (tedchong) Sun, Jun 1, 1997 (09:42) 4 lines Terry, which directory is short of space on barton? /home is only 12% used, still have about 600MB left :-) Topic 2 of 11: 'Linux' Response 10 of 15: Paul Terry Walhus (terry) Mon, Jun 2, 1997 (08:20) 11 lines /var and /usr are both real full. I need to add to /var because that's where a bunch of mail keeps overflowing and filling up the hard drive. I could use a lot more room there. I'm thinking about plugging in a 3 gb Quantum and setting it up as the second drive on www. Any tips on upgrading that system (step by step procedure). I guess the first would be to plug it in and run BSDI's disk formatting program. Then link it to /var as a filesystem. Topic 2 of 11: 'Linux' Response 11 of 15: Ted Chong (tedchong) Mon, Jun 2, 1997 (09:15) 9 lines For short run you can link /var to /home since /home has 600MB of space. To do this, just run run on shell: mkdir /home/var ; ln -s /home/var /var make sure /var is not there in the first place. Topic 2 of 11: 'Linux' Response 12 of 15: terry (terry) Mon, Jun 2, 1997 (11:22) 1 lines cheech Topic 2 of 11: 'Linux' Response 13 of 15: Paul Terry Walhus (terry) Mon, Jun 2, 1997 (11:27) 13 lines I did this: barton# mkdir /home/var ; ln -s /home/var /var barton# df Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/sd0a 9727 5459 3781 59% / /dev/sd0f 705727 77061 593379 11% /home /dev/sd0h 198335 177773 10645 94% /usr /dev/sd0g 63535 57773 2585 96% /var barton# Do I need to reboot for it to take effect now? Topic 2 of 11: 'Linux' Response 14 of 15: Ted Chong (tedchong) Mon, Jun 2, 1997 (19:33) 20 lines Re: /var on barton Terry, I just did an 'du' on /var at barton and found below directories has eaten the most space: 17818 ./www 60920 ./account 22142 ./log 8218 ./webdocs You don't have to reboot barton. What I found you have not link /var to /home/var, to to this, see below step-by-step: 1. rm -f /home/var 2. mv /var /home/ 3. ln -s /home/var /var This will make a link from /var to /home/var Topic 2 of 11: 'Linux' Response 15 of 15: Paul Terry Walhus (terry) Tue, Jun 3, 1997 (09:20) 5 lines OK I'll try that now. Check and see if this works ok?"}, {"response": 3, "author": "terry", "date": "Tue, Jun  3, 1997 (11:25)", "body": "The problem I'm having now, Ted, is that when I open elm to read my mail it shows that I have no mail because it's still looking for /var/mail and my mail is now in /home/mail or /home/mail/var or something. How do we fix this?"}, {"response": 4, "author": "terry", "date": "Tue, Jun  3, 1997 (13:33)", "body": "barton:/var su Password: barton# cd / barton# mv var vartemp mv: var: Device busy I tried this because you said to: su cd / mv var vartemp ln -s /home/var /var And after this, doing an ls -l / to see /var -> /home/var Why Device busy??? Should I try again?"}, {"response": 5, "author": "terry", "date": "Tue, Jun  3, 1997 (13:48)", "body": "As it stands now, I go to elm and see no mail because it's over in the new directory."}, {"response": 6, "author": "terry", "date": "Tue, Jun  3, 1997 (14:08)", "body": "Now it says: barton# mv var vartemp mv: rename var to vartemp/var: Not a directory barton# What next Ted?"}, {"response": 7, "author": "tedchong", "date": "Sat, Jun  7, 1997 (09:26)", "body": "All solved already, should be able to get email from barton"}, {"response": 8, "author": "terry", "date": "Sat, Jun  7, 1997 (11:51)", "body": "Cool deal, Ted to the rescue again. Thank you for helping the Spring!"}, {"response": 9, "author": "tedchong", "date": "Mon, Jun  9, 1997 (08:50)", "body": "Now back to BSDI unix, how can one disable ICMP packets? What I mean is disable someone from the Internet ping your host...."}, {"response": 10, "author": "terry", "date": "Mon, Jun  9, 1997 (12:55)", "body": "We had a ping atttack once and we tracked down the ping bomber and had his account terminsated. It was senseless . Why pick on little Spring? Some firewalls and deal with ping attacks. And you might search omse the the Interneet security websites. There may be something about his in our security topic. These are also calle Denial of Service attacks."}, {"response": 11, "author": "tedchong", "date": "Mon, Jun  9, 1997 (19:17)", "body": "Ping attacks happen everywhere, they don't spare small host. I have looked into my router's manual and found something like \"Disable ICMP redirects\" and may be it gave me some clue..."}, {"response": 12, "author": "terry", "date": "Tue, Jun 10, 1997 (10:28)", "body": "Wow. What a change: barton:~ df Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/sd0a 9727 5482 3758 59% / /dev/sd0f 705727 138039 532401 21% /home /dev/sd0h 198335 177863 10555 94% /usr /dev/sd0g 63535 1 60357 0% /var Now, Ted, will you please tell us exactly how to do this step by step so we can learn from this? Good job, Ted! Now, do you know how to restart our realaudio server which has stopped. I know, but I left my notes at home today. We run realaudio for our http://www.childrenstory.com website."}, {"response": 13, "author": "tedchong", "date": "Wed, Jun 11, 1997 (10:15)", "body": "Re: df and ln It's a simple task, I went to /var (at barton), became root, then move all files and directories to /home/var by using \"mv /var /home\", then make symbolic links by using the ln command to each directory in /home/var, for example: ln /home/var/spool /var/spool will link /var/spool to /home/var/spool. The actual way is to make /var point to /home/var but due to some files used by the system it can't be done without a reboot. more info at \"man ln\" For readAudio, let me take a look tomorrow (it's 12am now in Singapore)."}, {"response": 14, "author": "terry", "date": "Wed, Jun 11, 1997 (11:19)", "body": "The Realaudio thing is a hot issue. Email ces@well.com and ask him how to restart it."}, {"response": 15, "author": "terry", "date": "Wed, Jun 18, 1997 (23:54)", "body": "Dave Thaler set up a \"most recent postings\" are today and I'm testing it. Take a look at the \"main menu\" and you should see this post, or you will if it's still recent. unix conference Main Menu"}]}, {"num": 30, "subject": "publicwebstations.com - helping Katrina victims", "response_count": 2, "posts": [{"response": 1, "author": "cfadm", "date": "Sun, Sep  4, 2005 (18:49)", "body": "The Opportunity: Older computers, Pentium 2 level or above, can run as Firefox web stations (or kiosks), requiring only 128mb or ram, a CD-ROM drive, a network card, and access to an Internet-connected network. Schools, libraries, agencies, and businesses could easily and quickly provide free public web stations to assist those displaced by the hurricane. The computers needed are available in abundance for free or minimal cost, and many organizations have an excess of these older computers with no use for them. The technology needed to turn them into web stations is both free and effective, being based on the Linux operating system and the Firefox web browser. A single file is downloaded and burned to a CD-ROM, placed in the CD-ROM drive of the computer, and then the computer is booted from the CD-ROM. The computer quickly boots up directly to a Firefox web browser window, not requiring any keystrokes or skills to get there. A working web station would take no more than 5 minutes to set up, and requires no ongoing maintenance except in the case of hardware failure. In case of any difficulties, the machine is just rebooted. The Vision: Our goal is to help create a grass-roots network of independent organizations and individuals who, by following the instructions on this website, can create and run free public web stations both for those made homeless by the hurricane Katrina and for the aid workers helping them. Long term, we believe this project can help to create the tools for immediate volunteer efforts to place public web stations in accessible areas after any major disaster, anywhere in the world. Rather than needing to be coordinated centrally, these efforts can be undertaken at the grass-roots level by individuals in affected areas."}, {"response": 2, "author": "cfadm", "date": "Sun, Sep  4, 2005 (18:49)", "body": "http://www.publicwebstations.com/vision.html unix conference Main Menu"}]}, {"num": 31, "subject": "regular expressions", "response_count": 1, "posts": [{"response": 1, "author": "terry", "date": "Wed, Dec  7, 2005 (22:47)", "body": "http://www.regular-expressions.info/ best all around reference unix conference Main Menu"}]}, {"num": 32, "subject": "pine", "response_count": 5, "posts": [{"response": 1, "author": "terry", "date": "Mon, Mar 13, 2006 (03:18)", "body": "pine may not be installed by default on your system."}, {"response": 2, "author": "terry", "date": "Mon, Mar 13, 2006 (03:23)", "body": "The pine information center is at http://www.washington.edu/pine/ Here are a good set of del.icio.us bookmarks on pine. http://del.icio.us/Deflexion.com/Messaging/Clients/Pine Here's the U of Delaware page on \"How to Use Pine\" http://www.udel.edu/topics/e-mail/pine/"}, {"response": 3, "author": "terry", "date": "Mon, Mar 13, 2006 (03:24)", "body": "CNNMoney.com (FORTUNE Magazine): How I Work save this Marissa Mayer, VP, Search Products and User Experience, Google, said: 'I use Gmail for my personal email ... but on my work email I get as many as 700 to 800 a day, so I need something really fast. I use an email application called Pine' to HostingProviders/Gmail Pine Messaging/Pine Gmail Google GTD ."}, {"response": 4, "author": "terry", "date": "Mon, Mar 13, 2006 (03:25)", "body": "Better set of pine bookmarks than the above. http://del.icio.us/Deflexion.com/Pine"}, {"response": 5, "author": "terry", "date": "Mon, Mar 13, 2006 (03:31)", "body": "Tracking Your Incoming Messages The Procmail log is located in $LOGFILE, which, if you used the instructions in Step 4 above, is $HOME/Procmail/pmlog. The contents of $LOGFILE depend on the values of $VERBOSE, $LOG, $LOGABSTRACT, and $TRAP, which you can read about in the Environment section of the procmailrc man page. You can use many commands to view the log including cat, more, less , and my favorite, tail, which I discuss in the next section. Following Your Log with tail -f If you want to continually follow your log, you can use tail -f $HOME/Procmail/pmlog To start tailing with the last 50 lines of the log, use tail -n 50 -f $HOME/Procmail/pmlog which on my system is equivalent to tail -50 -f $HOME/Procmail/pmlog To quit live monitoring your log, type CTRL-C If you want to be able to run other commands while the tail is happening, use & to put it in the background: tail -f $HOME/Procmail/pmlog & To learn about tail, see man tail. Another tail option is to use Paul Chvostek's ProcMail Log Watch (pmlw), which is an \ufffdawk script that tails your procmail log file, summarizing results and giving you basic traffic statistics, live.\ufffd unix conference Main Menu"}]}, {"num": 4, "subject": "Sun OS", "response_count": 0, "posts": []}, {"num": 5, "subject": "Solaris", "response_count": 1, "posts": [{"response": 1, "author": "terry", "date": "Thu, Aug 20, 1998 (16:08)", "body": "Sun is \"giving away\" promotional copies for non-commercial use - one only needs to pay the price of media & shipping. And it's both Intel and SPARC versions of Solaris. Details at: http://www.sun.com/developers/solarispromo.html unix conference Main Menu"}]}, {"num": 6, "subject": "sendmail", "response_count": 3, "posts": [{"response": 1, "author": "terry", "date": "Sun, Feb  9, 1997 (21:24)", "body": "Here's how to send email on just one line: To send gail the file foo.txt in a mail message with the subject \"Hi, gail\" try: elm -s 'Hi, amy' amy That also works for regular (BSD-style) mail, eg mail -s \"hello there\" amy"}, {"response": 2, "author": "terry", "date": "Mon, Nov 24, 1997 (19:11)", "body": "Within a script, I'm using a command like this to send e-mail: /usr/sbin/Mail Works fine, except that if the destination happens to be a local one (on the same system) then there is no domain name on the From: address when it arrives in that person's mailbox. If the recipient picks up their mail via POPmail while dialed in through another ISP, and then tries to reply, the reply fails due to the absence of a domain name. So to force the domain name to appear in the From: address do this (echo \"From: myname@mydomain Subject: some subject \"; cat messagefile) | /usr/lib/sendmail -t Is there a better way?"}, {"response": 3, "author": "terry", "date": "Tue, Dec 15, 1998 (09:44)", "body": "Date: Sun, 13 Dec 1998 15:59:36 -0500 From: Wietse Venema Subject: Wietse's Postfix (was VMailer) software release The Postfix mail system is to be released as open source code via the IBM AlphaWorks web site on December 14th, 1998. The URL is: http://www.alphaworks.ibm.com/ (N.B. this site uses javascript) Postfix is my attempt to provide an alternative to the Sendmail program, which probably delivers billions of email messages daily. Postfix attempts to be fast, easy to configure, reliable and secure. The source code is released at no cost and with no strings attached. You are encouraged to install/use/enhance/sell Postfix anywhere. After the initial release by IBM, the Postfix software is expected to evolve under control by its users. Future releases are expected to happen from outside IBM. As the original author I will coach the Postfix evolution for a while. Additional information is available via a collection of web sites dedicated to the Postfix software: http://www.postfix.org/ Among others, these sites carry software that was contributed by the Postfix alpha testers, a small list with Postfix errata, and information about Postfix mailing lists. Happy Postfixing! Wietse unix conference Main Menu"}]}, {"num": 7, "subject": "grep", "response_count": 4, "posts": [{"response": 1, "author": "ian", "date": "Wed, Feb 19, 1997 (23:49)", "body": "grep is an excellent place to start experimenting with \"regular expressions\". After get some experience with grep, I suggest you look at awk (or perl, but awk is easier to start with). A good awk book is \"The Awk Programming Language\" by Aho, Kernighan and Ritchie, ISBN 0-2-1-07981-X (Addison-Wesley Publishing Company). Another good book is the O'Reilly book, \"Awk and Sed\". If you really get interested, you can look at sed and lex. One big difference between grep and the other tools is that grep can find and display strings that match a regular expression, but the other tools can also edit the strings that you find."}, {"response": 2, "author": "terry", "date": "Sun, May 25, 1997 (11:19)", "body": "What's the best reference on using grep itself? What's the best unix book that you know about?"}, {"response": 3, "author": "tedchong", "date": "Fri, May 30, 1997 (21:25)", "body": "Reference on using grep: on any unix shell type \"man grep\" So far the best unix books are from O'Reilly at http://www.ora.com See http://www.ora.com/catalog/prdindex.html for the full index and prices."}, {"response": 4, "author": "terry", "date": "Sat, May 31, 1997 (11:56)", "body": "Are there any good website tutorials that you know of? For grep and UNIX in general and BSDI specifically? unix conference Main Menu"}]}, {"num": 8, "subject": "Windows NT 3.51 and 4.0", "response_count": 3, "posts": [{"response": 1, "author": "ian", "date": "Thu, Feb 20, 1997 (00:00)", "body": "You can make a Windows NT system look a lot like UNIX if you get software from Mortice Kern Systems ( http://www.mks.com/) . MKS Toolkit -- UNIX utilities, including make MKS Source Integrity -- RCS with extensions MKS lex and yacc I use all three, along with Watcom C/C++ 10.6. I would suggest that anyone unfamiliar with MKS software buy the Toolkit first and see if they want the other stuff later. Alternatives (which I have not tried) MKS Toolkit -- Thompson Toolkit MKS lex and yacc -- GNU flex and bison Based on literature from Thompson Toolkit, I prefer MKS Toolkit -- MKS goes for POSIX compatibility, Thompson goes for maximizing what you cna do on a PC, even if this is not POSIX. I have developed software based on shell scripts and UNIX utilities and programs in lex and yacc and C. I find that you can write on a PC with MKS and run the stuff without change in UNIX, and you can develop in UNIX and run without change on a PC (NT, 95, OS/2, or MS-DOS)."}, {"response": 2, "author": "terry", "date": "Sun, May 25, 1997 (11:20)", "body": "Ian, have you run across any good telnet servers for NT 4.0?"}, {"response": 3, "author": "tedchong", "date": "Fri, May 30, 1997 (21:48)", "body": "Re: NT 4.0 Server Terry, I too have to move grom unix to NT due to company's policy. I recently setup a NT 4.0 server with IIS, service pak 3 and MS proxy server as our company's Internet host. I find the setup very easy and straight forward and it works immediately when I plug the network cables to it (may be I have lots of experience on unix already so I find it easy on NT). The problem of using NT is if something doesn't work, it is hard to find out why. Also it is hard to implement custom implementation or change the programs (no source codes). For unix (Linux/FreeBSD/sun) I can easily change the source and do what I want. For telnet server for NT, loo at: http://www.ataman.com/products.html http://www.pragmasys.com http://www.seattlelab.com/ A good link for NT resources is: http://www.primenet.com/~buyensj/ntwebsrv.html unix conference Main Menu"}]}, {"num": 9, "subject": "AIX", "response_count": 0, "posts": []}]}