Well, where to start? I'm going to confine myself to the Debian/Ubuntu way of things and not really bother to mention where other distributions or flavors of unix or whatever might vary, because I can't think of a very compelling reason personally ever to use those other systems; they all pretty much do the same thing on a very wide range of hardware.


The file system

File systems are strictly hierarchical structures with directories, which contain files, and files, which contain data of one kind or another (but a directory is also itself considered a file). In DOS/Windows, each physical device was associated with its own directory tree, hence C:, D:, etc. It turns out that that's a pretty dumb way to do things, because it's useful to abstract yourself away from the physical details of the computer. Hence every running linux system has a filesystem with a single root, represented as a forward slash: '/'.

In the unix world, so far as possible, everything that can be conceived of is represented as a file somewhere in the file system—even things that do not actually represent a section of data on a physical storage medium as such. This has many advantages for system administrators and programmers.

The root directory of an Ubuntu install contains the following directories (by default):

bin: Stands for binary. Contains many of the various small programs you'll be using as commands at the command line, like cp, mv, rm, and so on.
boot: Contains kernel images, GRUB, and the GRUB configuration. More on booting to come.
cdrom: This is a symbolic link to the cdrom drive. More on that later.
dev: Devices. Files in this directory represent physical (or virtual) devices like hard drives, ram chips, virtual terminals, audio outputs, and so on.
etc: Configuration files. Linux (thank god) does not use a central registry-type application to store configuration data for various applications. That means the developers have the freedom to format plaintext configuration files in a manner of their choosing, making life easier and more accessible for us. One thing that might be counterintuitive, though, is that you might think all the files associated with a particular program would be found in the same place in the file system. Not so.
home: Where user home directories are found.
lib: Library. Contains code libraries for system-level applications.
lost+found: Used by the filesystem checker as a place to deposit recovered file fragments after a crash.
media: Where storage media like CDs, DVDs, flash drives, hard drive partitions (other than the one that contains the root file system), et cetera, are mounted by default.
mnt: Stands for mount; its function has been taken over by media and it is now vestigial.
opt: Optional (add-on) software. Certain apps or services seem to insist on installing themselves there.
proc: Processes. This is filled with directories that represent running processes. That's part of the unix "everything is represented as a file" ethos. The name of each directory in here is the process id # for the process it represents.
root: This is the home directory for the root user. More on that later.
sbin: System binaries. Like bin, more programs, but these are ones the end-user is considered unlikely to want to use.
srv: Service data; meant to hold data served by the system (web sites and the like), though I've never actually seen anything in it.
sys: System. Contains representations for various system functions. All sorts of stuff in here; hard to generalize, but for example on my mac I can control the fan by writing to a file somewhere in there. Probably you could find the temperature sensors represented in here somewhere.
tmp: Temporary files. Lots of applications find it convenient to put files here that nobody intends to keep.
usr: User. This contains stuff for user applications, e.g. /usr/bin/firefox is the executable that starts firefox, /usr/lib/firefox contains firefox code libraries, there's a directory in there somewhere that contains firefox plugins as well, and so on.
var: Variable. Contains files that are considered likely to change often, like system log files. In more primitive times, with older storage hardware and more primitive filesystems, it sometimes paid to have this on a separate piece of hardware or a different kind of filesystem.

Now, the end-user is not expected to have any need to go looking around in any of these areas of the file system. Each user gets his own home directory, which is supposed to contain all data specific to him, including user-specific application configuration data, documents, media, and whatever. The home directory for the current user lives at /home/username (e.g. /home/jim) and can also be invoked by the shorthand tilde '~'. So to change to your home directory on the command line, cd ~ is sufficient, and to change to a subdirectory thereof, cd ~/subdirectory is good.

The path to a file is, pretty much just like in DOS, its parent directories starting from root and separated by forward slashes; e.g. if I have a file in a Documents directory which is in my home directory, I could refer to it by /home/jim/Documents/file or by ~/Documents/file. Relative paths, which do not start with a slash, are relative to the current working directory; hence if I am currently "in" my home directory I can refer to that file as Documents/file.
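To make the absolute/relative distinction concrete, here's a short sketch, assuming (as above) a user jim with a Documents directory:

    cd /home/jim/Documents    # absolute path: starts from the root
    cd ~                      # back to the home directory, using the tilde shorthand
    cd Documents              # relative path: works because we're already in /home/jim
    pwd                       # print the current working directory, to see where we ended up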

The boot process

The BIOS reads the Master Boot Record on some hard drive, which directs it to load the GRand Unified Bootloader (GRUB). This is the program that provides the boot menu that allows you to choose between windows and linux. GRUB is configured to point the computer at the "kernel." The kernel is stored as a file in /boot, namely /boot/vmlinuz-2.6.27-9-generic. The kernel is loaded into memory and starts initializing hardware. This is the part of the operating system that directly controls access to the hardware and offers processor time, ram, and other services to running processes.

When the kernel is ready it calls the init program. Init is given a process id of 1 and its job is simply to load other programs according to the configuration of the system. What follows is regulated by the concept of runlevels, which is just a stepwise order for bringing various processes online. Runlevel 1 is also known as 'single-user mode' and it's what you get if you boot into recovery mode; it is weakly analogous to 'safe mode' in windows. Runlevels 2-5 are for our purposes identical and are what the computer spends most of its time in. The point is that init runs scripts (think of "batch files" from DOS) contained in /etc/rc#.d, where # refers to the runlevel it's entering: in recovery mode it runs everything in rc1.d, and on a normal boot everything in rc2.d, in numbered order. These scripts start everything from networking to the display server and desktop environment, the audio server, and a bunch of other things. For ease of configurability, the contents of /etc/rc#.d are actually symbolic links to scripts that are contained in /etc/init.d. That way, if you don't want a particular service or program to start at boot-up, you remove the appropriate links, but you still have the script itself, so you can start the program or service on your own whenever you want. The command-line programs update-rc.d and invoke-rc.d make it easier to manage and control these scripts, respectively. For example, if I want to stop the program that serves webpages, called apache2, I can run sudo invoke-rc.d apache2 stop at the command line. To see all the services you can control with this command you only have to look in /etc/init.d, because that's where they all are.
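To make that concrete, here are a few invocations using apache2 as in the example above (a sketch; you'd run these on whatever machine hosts the service):

    sudo invoke-rc.d apache2 stop        # stop the web server right now
    sudo invoke-rc.d apache2 start       # start it back up
    sudo update-rc.d -f apache2 remove   # delete the rc#.d links so it won't start at boot
    sudo update-rc.d apache2 defaults    # put the default links back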

File attributes: users, groups, and permissions

Every file (including directories) is associated with an 'owner' and a 'group' and has associated permissions that tell the system who is authorized to read, write, or execute the file. You could therefore represent a file's permissions in a table like this:
              read    write    execute
  owner        *        *         *
  group        *        *
  everyone     *

giving, obviously, nine possible binary attributes and therefore 2^9 = 512 possible states.

Here 'owner' refers to the user who owns the file, 'group' refers to members of the group that 'group-owns' the file, and everyone means everyone. 'Group-ownership' just is what it is; every file is associated with one group and those are the users to whom the group permission attributes apply. Another way to represent the permission state of a file is using octal digits. In the example table, the owner can read, write and execute, the members of the associated group can only read and write, and everyone can read. Representing the table in binary, writing a 1 for each * and a 0 for each blank cell, would just give you 111 110 100, which in octal is 764. The commands chown and chmod (think 'change owner' and 'change mode') deal with setting these attributes.
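A few illustrative invocations (notes.txt is a made-up filename, and jim a user as in the earlier example):

    ls -l notes.txt                 # the first column shows the permissions, then the owner and group
    chmod 764 notes.txt             # the octal example from above: owner rwx, group rw, everyone r
    chmod g+x notes.txt             # symbolic form: add execute permission for the group
    sudo chown jim:jim notes.txt    # make jim the owner and the group 'jim' the group-owner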

Besides directories and regular files there are other types, only one of which I'll mention because I don't know enough about the rest. The symbolic link is simply a file that points to another file (including directories). These are very useful for a bunch of stuff—as discussed above, the init system makes heavy use of them. Permissions and ownership on symbolic links are meaningless; the system just looks at the permissions on the file that the link points to. The ln command creates links; generally you will invoke it as ln -s target link, where target is the thing to be linked to, link is the path to the link, and the -s specifies a symbolic rather than a 'hard' link, which I don't want to get into now.
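For example, a symbolic link is how you'd make a shared directory show up inside your home directory (the paths here are just illustrative):

    ln -s /media/A/Music ~/Music    # ~/Music now points at the shared music directory
    ls -l ~/Music                   # the listing shows the link and what it points to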

Virtual terminals, X, and the desktop environment

By default the system maintains so-called virtual terminals, accessible by pressing CTRL-ALT-F#, where # is the number of the terminal to which you want to switch. Switching to terminals 1-6 will give you a login prompt and if you log in you get a command shell, which can be useful if the window server shits itself. VT 7 is for the window server by default and so when the computer starts and shows you the login screen or the desktop, you're using VT 7.

The desktop environment is created by a bunch of programs layered on top of each other (called 'abstraction layers' in computer science because each layer provides an interface to the one above it without having to reveal the specifics of its implementation). The X Window System draws windows on the screen and accepts input from mice and keyboards and so on. That is all it does. It does not draw scroll bars or title bars, it does not move windows, it does not resize windows, it just draws them. Programs called window managers do stuff like drawing window borders, title bars, etc. The one we're using is compiz, which also makes use of the graphics hardware to provide the effects like the spinning cube and so forth. The next layer on top of that is the GNOME desktop environment. This provides common configurability and interoperability between GNOME applications, and it also provides the panels, menu, clock, and all that. The point about abstraction layers is that any of these applications could be replaced with some alternative, and the other layers wouldn't care. It is quite possible to run GNOME with a different window manager, like iceWM or blackbox or whatever else you want, or to ditch GNOME in favor of a competing desktop environment like KDE; in either case, the X Window System doesn't even notice. And to drive the point home, programs in the 'application layer' don't care what's underneath them, either. I can run firefox on any window manager I want, or in any desktop environment. Of course, there are reasons to choose some over others, like features, or in the case of GNOME or KDE, interoperability and communication between different applications, or the ability to, say, change the fonts in all GNOME applications from one single GNOME preference menu.

Another thing about X is that it's network transparent. A GUI application basically connects to the X server and asks it to draw windows on its behalf. This can be done irrespective of whether the application is running on the same machine as the X server. In practice that means I can run a program on a remote computer but connected to my local display (monitor, keyboard and mouse). In a way, the client/server terminology is reversed: I could log in to a remote computer, connecting to its ssh "server" (which makes me an ssh "client"), and run a program on that remote computer, but then that application is referred to as an X "client" which is connected to my local display "server."
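The easiest way to see this in action is through ssh's X forwarding, assuming the remote machine allows it (tivo.local here is the living room computer mentioned later):

    ssh -X tivo.local firefox    # firefox runs on tivo, but its window shows up on your local display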

The shell

Read the following, excerpted from my law school personal statement:

An adage holds that "a picture is worth a thousand words," but even a cursory examination finds that principle lacking. Imagine having to ask for the salt at the dinner table by drawing a picture, and it should be clear that a mere three words—"Pass the salt"—have as much power as you could ask for. There is a caveat: your fellows have to speak English for this to work. If they don't, you are quite justified in resorting to gesturing or drawing pictures in order to make yourself understood.

At the risk of boring my readers, I cite computing as a more substantial example of what I'm talking about. These days the end-user generally interacts with a personal computer through the familiar graphical user interface (GUI); menus and buttons are depicted on screen for us to point and click. These interfaces are designed to be as predictable as possible; menus and icons are clearly labeled, and so on. This is undoubtedly a good thing, generally speaking, because it means that anyone can use a computer without any foreknowledge whatsoever. But the GUI is inherently limited. If the interface designer did not anticipate the use to which you might like to put the programs with which you're interfacing, you're out of luck. Even if he did anticipate a particular use, it might exist in a large number of permutations such that there's no way he could include them all within the GUI paradigm. For example: you want to copy some files from your music collection on your computer to a portable drive, but only those files whose names start with f but do not end in s or n, and are more than two years old but less than five. In one English sentence, I have managed to fully, powerfully, flexibly specify what you want to do, and this regardless of whether your collection contains ten files or ten million. But in the typical drag-and-drop paradigm of most file browsers, you'd have to personally identify, one at a time, which of the files fit the criteria. If only there were a way to express your desires to the computer using language. Of course, there is. Command interpreters (or "shells") are programs that accept linear, symbolic instructions; in short, real sentences. Ironically, they can seem primitive, probably because they remind us of a time before GUIs made computers accessible to all. And they do come with the same caveat as appears in the preceding paragraph: you have to learn to speak this language—its verbs, their syntax, and so on.

A shell is a program designed to accept typed commands. There are many out there but I only concern myself with the most prevalent, and the default for Ubuntu: it's called bash. Basically there are built-in commands that the shell provides and there are also lots of little command line programs that the system provides; they are used in the same way. This is just like DOS back in the old days; commands take zero or more mandatory or optional arguments, along with options (called switches in DOS; remember typing dir /w? The /w is the switch). In bash the options are preceded by a - or a -- and there has to be a space between the command and its options, unlike DOS. Most of the DOS commands you might remember are naturally going to have equivalents: instead of move, copy, and dir, we have mv, cp, and ls. Two good commands to know are man command, where command is what you're trying to find out about, and apropos string, where string is a search string or subject for which you are trying to find relevant commands. The man page for each command will give you the syntax and a description of the available options. It's usually pretty straightforward. Typing something like apropos copy will give you a list of commands that might have something to do with copying things, one of which is cp. Then typing man cp will fully specify how to use that command. At the end I'm going to just give a list of commands that are commonly used and not exhaustively describe them—that's what the man pages are for.
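A few lines to show the flavor of options (most GNU commands have both a short and a long form for each option):

    ls -l /etc               # long listing of /etc, using a short option
    ls --all --classify ~    # long-form options; equivalent to ls -a -F
    apropos copy             # find commands that have something to do with copying
    man cp                   # read the full manual page for cp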

Some more things about the command line: tab completion. If you are typing a command or a path, pressing tab will cause the shell to attempt to complete the command or filename for you, which can make things considerably more pleasant. If there are several matches, pressing tab again will show them; again, useful if you forgot exactly how a command is spelled or if the path name is very long and you don't feel like having to type it exactly.

Running tasks in the background: sometimes you want to issue a command that will run for a while but still use that shell terminal to issue more commands. This can be done for any command that does not require further shell input from you, but comes up especially in the case of GUI programs. Say you want to start firefox from the terminal but still use that terminal for other stuff. You run it in the background by following the command with an &, e.g. firefox &. Try it yourself with and without the & to see what I mean.
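A minimal illustration (jobs and fg show up again in the command list at the end):

    firefox &    # start firefox and get your prompt back immediately
    jobs         # list the background jobs belonging to this shell
    fg %1        # bring job number 1 back into the foreground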

You 'escape' a character with the backslash. This is to strip a character of its special meaning; for example, to refer to a filename with a space in it you have to escape the space, or else the shell will take the space to mean that you have come to the end of the name of the file. E.g., a file is named 'Rocky V.avi'. To refer to it as an argument to a command, you must type Rocky\ V.avi. In actual practice you would probably use tab completion to get this done, but the concept of escaping must be mentioned. There's also quoting, strong and weak. Strong quoting uses single quotation marks and means that everything inside the quotes is to be taken perfectly literally (except of course a single quotation mark). In weak quoting, stuff inside double quotes is taken mostly literally, but some characters still retain their special meaning, most notably the $, which is a sigil, the character that prefaces the name of a shell variable. More about this kind of thing later. The point is, in referring to the file 'Rocky V.avi' you can type Rocky\ V.avi, "Rocky V.avi", or 'Rocky V.avi'.
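A quick sketch of the difference (the filename is the one from the example; $HOME is a shell variable holding your home directory):

    touch Rocky\ V.avi          # escaping the space: the shell sees one filename, not two arguments
    ls -l "Rocky V.avi"         # weak quoting works here too
    echo "my home is $HOME"     # inside double quotes, $HOME still expands
    echo 'my home is $HOME'     # inside single quotes, it's printed exactly as typed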

Globbing should actually be familiar to you to some extent from DOS; it's a way of referring to multiple files, usually as arguments to some command. Asterisk refers to any number of any characters, while question mark refers to any one character. Hence R* would match Rocky or R or Rasf830832 or whatever, but R? would match only R1, Ra, Ro, or the like. To, say, copy everything in a given directory, you could do cp directory/* otherdirectory. Globbing is related to the concept of regular expressions, which you can look up on Wikipedia and elsewhere; they're a valuable tool.
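For instance (the paths are made up, and these only do something useful if matching files actually exist):

    ls R*                                # anything whose name starts with R
    ls R?                                # R plus exactly one more character
    cp ~/Music/f* /media/flashdrive/     # copy everything in ~/Music whose name starts with f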

Many commands in the shell make use of input and output. echo "Hello, World" will output that phrase to the terminal in which you are working. grep hello searches the input it receives for the word hello and outputs all the lines of that input that contain it. If you ran grep hello just like that, it would sit there and wait for you to provide some input, but more often we want grep to work on the output of some other command. The shell facilitates this with the redirection operators, the most important of which is the pipe: | . It takes the output of one command and feeds it to the next command. Hence ls | grep puppy would list the files in the current directory, but instead of outputting the result, would send it as input to grep, which would then output all lines, if any, that contain 'puppy.' You can keep on stringing commands together like this as much as you want to suit your needs; the great thing about the command line is that it's infinitely flexible.
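A few pipelines to play with, plus one more redirection operator, >, which sends output to a file instead of the screen:

    ls | grep puppy            # the example from above
    ps aux | grep firefox      # find firefox among the running processes
    ls /etc | less             # page through a long listing one screenful at a time
    ls /etc > etc-files.txt    # write the listing into a file called etc-files.txt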

TCP/IP and Services

Various programs offer services to other computers, both on the local network and over the internet. The TCP/IP protocol uses the concept of ‘ports’ to arrange client-server interactions. A metaphor for this might be that the IP address of a given computer (or ‘host’) is the mailing address for a building with many doors, and each door is a port. Programs run all the time and are configured to answer one and only one particular door. Here are some services we have running on the computer in the living room (they end in d for daemon, just a name for a program that runs in the background and either does something periodically or waits around for service requests from other programs):
apache2 (default 80, we use 8080): This is the webserver; it answers http requests with web page data.
sshd (port 22): Secure SHell. Allows remote login, among other things.
nfsd (port 2049): Network File System. Allows remote users to mount directories from the server onto their file systems. Another example of abstraction; once mounted, applications don't even realize that that branch of the file system isn't on local hardware.
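If you're curious which doors a machine is actually answering, netstat will list the listening ports and the programs behind them:

    sudo netstat -tlnp    # -t TCP only, -l listening sockets, -n numeric ports, -p show the owning program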
Now you know why the router is configured to forward requests on port 22 to the living room computer. That router uses Network Address Translation to allow multiple hosts on our local network to use the internet even though we only get one IP address (at a time) from Verizon. When it receives a request for some kind of service, the router by default just drops it, because, for one, it wouldn't know where to send it, and for two, this keeps us safe. It does let us specify that requests on a given port be forwarded to a given computer, which presumably is configured to handle them. That's the business behind port forwarding. Now, Verizon helpfully blocks some low port numbers to keep customers from doing certain things. It blocks port 80, which is the default http port, so I have had to use an alternate port, in this case 8080, and configure my internet addresses to make requests on that port.

A further note about NFS: this is what I use to mount your home directory from the computer downstairs to the one upstairs. You're allowed to choose anything you want for a mount-point, even a directory that already has stuff in it, so I mount it right over your local home directory, meaning I don't have to change any settings at all on your computer except to tell it to do the mount at boot time—and if for any reason the mount fails, your preexisting home directory is still sitting there where it was since the system was installed, ready to serve as a backup. You couldn't log in the other day because I left the permissions for that local home directory in a bad state, but that's fixed now.
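By hand, that kind of mount is a single command; making it happen at boot is just one line in /etc/fstab. A sketch, with the hostname and paths made up for illustration:

    sudo mount -t nfs tivo.local:/home/dad /home/dad    # mount the remote home directory over the local one
    mount | grep nfs                                    # confirm which NFS filesystems are currently mounted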

Our configuration

A typical out-of-the-box home network configuration consists of a bunch of computers connected to a NAT-type gateway/router, which is then connected to the internet via a modem of one kind or another. The gateway/router also provides DHCP services to the computers on the network.

DHCP stands for Dynamic Host Configuration Protocol and all it does is hand out IP addresses to computers that ask for them. The IP address space is 32 bits, represented as four dot-separated octets, e.g. 192.168.1.104. IP addresses that start with 192.168 are reserved for private use on local area networks.

DNS stands for Domain Name System and is responsible for mapping human-readable names to IP addresses. The Internet Service Provider, Verizon in our case, provides DNS servers which are in turn linked recursively to higher-level DNS servers, all the way up to the ‘root’ servers, associated with so-called top-level domains like .com and .org and so on.

So, on the typical home network, a computer will boot up, broadcast a request for DHCP service, and be assigned a DHCP lease and its own IP address that is unique on that local network. At the same time, the DHCP server on the router will tell the computer the address for Verizon's DNS servers so the computer can successfully look up internet domains while, say, browsing the web.

Our configuration is somewhat different. The topology is similar: one router/gateway, a network switch, and a wireless access point. But the router's DHCP server is turned off in favor of one running on the central computer. That computer also runs its own DNS server. The reason is simple: by running my own DHCP server I can control exactly what it does; I can order it to hand out the same address to a given computer every time, and actually there are a shitload of other options I can call on. The biggest motivator for doing this has been the ability to boot computers over the network, either as so-called 'thin clients' or as fully functional, but diskless, hosts. The only potential downside to this is that if the living room computer is not on when another host comes on, there won't be anyone to hand out IP addresses. But since that computer is serving web pages and downloading torrents and stuff anyway, there's never a good reason to turn it off.
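For flavor, here's roughly what pinning a host to a fixed address looks like in the DHCP server's configuration (ISC dhcpd syntax; the MAC address here is made up):

    host dad-desktop {
        hardware ethernet 00:16:3e:12:34:56;   # the network card's hardware address
        fixed-address 192.168.1.104;           # always hand this host the same IP
    }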

Every device has a hostname. The living room computer is called tivo, your computer is dad-desktop, and so on. Some automatic discovery software (Avahi, which implements multicast DNS) makes each host on our local network available under the .local domain; practically speaking this means that you can refer to the computers as tivo.local, dad-desktop.local, and so on where appropriate.

Regarding the file system on the computer downstairs (tivo): I've already described how user home directories exist in /home. There are now three hard drives on that computer, one with three partitions and two with one partition each. The first hard drive is the boot drive and contains the root partition, a swap partition, and another ext3 data partition that is mounted on /home. The other two are mounted on /media/A and /media/B and contain all the stuff that I want commonly available to everyone: the music, movies, TV, and application files. I placed symbolic links to these directories in everyone's home folders, so as far as they are concerned, they never have to leave their home folders to get what they want—but none of the data are stored in duplicate.

Commands

The Advanced Bash Scripting Guide teaches one how to write shell scripts, but since a script is simply a series of commands, there is extensive information and example usage for a lot of commands. The guide also treats thoroughly the topics of shell variables, globbing, regular expressions, job management and background tasks, and I/O redirection. This stuff is probably not as complicated as I'm making it seem in a short article bereft of examples—the guide has the examples.

Below I'll give a table with a couple dozen commands and some brief comments about them. I will not include usage examples, syntax, et cetera, because you have man pages and the aforementioned guide for that. Many of these get much more useful when invoked with some of their options, so you do have to read the man pages. You should feel free to play with these commands on your own computer; if you never invoke root permissions (using sudo) you're totally safe (except maybe losing your own files, of which you don't have many at this point, and which anyway I just backed up somewhere safe), and even if you do, what's the worst that could happen? Even if you somehow wreck your install we could just do another one; you won't be able to hurt the other computers, because you don't have root access for them (it's hard enough keeping track of the changes I make on that computer; it would be a real nightmare if, say, you and/or beefchip also had root access to it). Also, Part 4 of the guide referenced above contains long clickable lists of commands; just browse through them and you will have some idea of which are going to be most useful or common.
General stuff:

echo: Write arguments to standard output.
ls: List contents of a directory to output.
cd: Change current working directory.
cat: Short for concatenate, which is somewhat confusing; outputs the contents of a file or standard input. Read its man page to see what I mean.
nano: Easy-to-use terminal-based text editor.
grep: Pattern filter.
ssh: Secure shell login; log in to a remote host's shell.
ping: Throw some IP packets at a remote host and see how long it takes to get a response.
less: Show a file or standard input one screenful at a time, so you can actually read the thing. Called less as a pun on "less is more"; the more program is the predecessor to less. Unix programmers love shit like that, apparently.
man: Maybe the most important command; it looks up manual pages for commands. Also often provides usage information for configuration files found in /etc—try man hosts or man fstab, for example.
apropos: Attempt to find the name of a command based on a search string; e.g. if you didn't know cp was the copy command, apropos copy would show you.
ps: List running processes.
kill: Kill a process with the given process id (pid).
jobs: List your running background jobs.

File commands:

cp: Like DOS copy.
mv: Like DOS move; also for renaming.
mkdir: Like DOS md; creates directories.
chown: Change file ownership and group ownership.
chmod: Change file permissions.
ln: Create links.
du, df: Disk usage; get usage information for files, directories, and mounted filesystems.

Some administrative commands:

sudo: Short for Super User Do; issue commands as another user, root by default. This one is rather important because without it you cannot do even the simplest administrative tasks. Most of the time if you try to do something that you will need sudo for, the shell will just tell you that you need to be root, or say 'permission denied,' which will be a hint to use sudo. Sometimes a program for which you needed to act as root will not be so clear about why it is failing...
invoke-rc.d: Activate an init script to start, stop, or restart some service or program.
update-rc.d: Manage the way init scripts are called at startup.
ifconfig: Get info about and configure network interfaces.
mount: Mount some filesystem onto your filesystem, be it a physical storage device, network share, cd, disc image (.iso), or even some other branch of your filesystem. Alternatively, list currently mounted filesystems.
finger, who: Information about current users.
passwd: Changes a user's password.
apt-get: Use this to interface with the Ubuntu package repositories; you can update the package list, install available upgrades, and install or remove packages. The GUI program Synaptic is a frontend to this command.
dpkg: Interface to the Debian package system; lets you install packages (contained in .deb files), remove them, reconfigure them, et cetera. There is also a GUI frontend for this one.
reboot: Reboots.
shutdown: Shuts down.

This list is obviously nowhere near exhaustive but it should give you something to play with. Browse around in the /etc and /etc/init.d directories, look at the files in there, open the man pages, and you'll get an idea. There's also the fact that we can use programs like screen and NX to share a terminal over the internet for training purposes and you can follow along while I administer the system. This shit is both simpler and dramatically more complicated than it looks, and that is by design. The so-called unix design philosophy, and just good CS principles in general, demand that the programs on the system do only specific things, and do them well and with as little reliance on knowing the inner workings of the other programs as possible—lucky for us.