Dropbox security issues

I use Dropbox heavily for storing many files I’d like immediate and synchronous access to across various systems. I enjoy knowing that if I place a file on my Dropbox folder at home, it’ll be available on my laptop later, on my work machine, or on other machines I use remotely. It’s very convenient.

Dropbox is essentially offering a “public cloud” to its users to hold their files. This also means that our files are stored on servers that we do not entirely control. Because of this, I make a habit of encrypting all the data in my Dropbox folder (call me old school, but there it is…)

This does make things a bit difficult, as the files are not immediately available to me insofar as I have to decrypt them first (using eCryptFS). While that’s essentially a simple process, it is an extra step. It does however give me a measure of relief knowing that if there should be any problems with the public cloud and my files were to fall into the hands of a third party, at least they’d then have to decrypt them first.

It turns out that Derek Newton has found some security issues with Dropbox. Every Dropbox installation under Windows places a config.db file under %APPDATA%\Dropbox (under Linux the equivalent files live in ~/.dropbox/ and are called dropbox.db and host.db).

All an attacker would have to do is gain access to a system running Dropbox, copy the config.db file (or the dropbox.db and host.db files under Linux), and place them on his own system, in his own vanilla (fresh) Dropbox installation. As Derek puts it:

. . . the config.db file is completely portable and is *not* tied to the system in any way. This means that if you gain access to a person’s config.db file (or just the host_id), you gain complete access to the person’s Dropbox until such time that the person removes the host from the list of linked devices via the Dropbox web interface.  Taking the config.db file, copying it onto another system (you may need to modify the dropbox_path, to a valid path), and then starting the Dropbox client immediately joins that system into the synchronization group without notifying the authorized user, prompting for credentials, or even getting added to the list of linked devices within your Dropbox account (even though the new system has a completely different name) – this appears to be by design.  Additionally, the host_id is still valid even after the user changes their Dropbox password (thus a standard remediation step of changing credentials does not resolve this issue).

I understand that Dropbox is trying to keep their system as easy to use as possible and allow systems to easily sync files, but this requires a second look and perhaps a bit of re-engineering.

Check out Derek’s full post here. He agrees that the only remedy at this time is to encrypt the files in your Dropbox folders. I also recommend you read the discussion occurring after his post, as there’s a vibrant discussion on the topic and Derek responds to some of the more cogent remarks. In this matter, I agree with Derek entirely that Dropbox (while very convenient) is vulnerable to some trivial attack vectors.

Dropbox may decide that for convenience, this design merits keeping without correction. If they should decide that, I’m OK with that since I encrypt my data anyway. This does stand as a warning though to those that don’t, that your files could be at risk and you should either avoid putting any sensitive data in Dropbox folders, or employ encryption.

This will of course make mobile Dropbox clients useless, since I’m aware of few encryption programs available for Android (or iThings) that are also available to the desktop. I know eCryptFS isn’t available for mobile devices, which means that viewing files on my cell phone has been and remains impractical.

Cloud storage is nice and can be convenient, but it is critical to protect your data. If you’re interested in eCryptFS (which I prefer over other encryption applications such as Truecrypt), check out my older blog post here for a full explanation of it and how to implement it on Debian-based systems (such as Ubuntu, Linux Mint, etc.)

In addition to all this, other bloggers are talking about Dropbox’s use of deduplication to back up its data. What this means is: if two different users with their own Dropbox accounts store the exact same file in their respective folders, Dropbox will only back up one copy of that file and simply attribute the bits to both users.

While this saves Dropbox a ton of storage requirements for backups as well as bandwidth and money, it does so at your expense. It also means that they’re not really encrypting your data. As Christopher Soghoian mentions in his post,

The service tells users that it “uses the same secure methods as banks and the military to send and store your data” and that “[a]ll files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password.” However, the company does in fact have access to the unencrypted data (if it didn’t, it wouldn’t be able to detect duplicate data across different accounts).

This bandwidth and disk storage design tweak creates an easily observable side channel through which a single bit of data (whether any particular file is already stored by one or more users) can be observed.

If you value your privacy or are worried about what might happen if Dropbox were compelled by a court order to disclose which of its users have stored a particular file, you should encrypt your data yourself with a tool like truecrypt or switch to one of several cloud based backup services that encrypt data with a key only known to the user.

[Of course I recommend eCryptFS over Truecrypt, as I’ve stated before. I have not tried SpiderOak.com (referred above in the quote) — it may be a viable alternative to Dropbox, but I’d still encrypt my data.]

An interesting tidbit I’ve intuited here is that Dropbox must be performing deduplication on the fly in its client. For deduplication to happen on the fly, ahead of any file upload to the Dropbox network, the client must send key bits (a hash) back to the Dropbox network for deduplication analysis.

Tests show (according to Christopher Soghoian’s post) that dropping an identical file at a later time indeed generates only a small fraction of the network traffic from your computer to Dropbox that a never-before-seen file generates. This means that Dropbox is looking at its data in aggregate across all users for duplicated bits, so that only the unique bits are backed up. If all users’ data were truly encrypted, this could not happen: encryption scrambles bits and would deny Dropbox the efficiency of bit-level comparisons.
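The check being described can be sketched with ordinary shell tools (this is an illustration of the general technique only; Dropbox’s actual protocol and hash algorithm are not public):

```shell
# Two users save identical files. Their hashes match, so a dedup-aware
# client only needs to send the short hash, not the file itself, and the
# service stores the bits once while crediting them to both accounts.
printf 'same content\n' > user_a_file
printf 'same content\n' > user_b_file

hash_a=$(sha256sum user_a_file | awk '{print $1}')
hash_b=$(sha256sum user_b_file | awk '{print $1}')

if [ "$hash_a" = "$hash_b" ]; then
    echo "duplicate detected: only the hash needs to cross the wire"
fi
```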

This means that users’ data are ultimately not really kept separate, and any encryption Dropbox may claim they apply is rendered useless since they’re blending user data on the back end to better manage and streamline their available resources (at the users’ expense).

Ultimately, what this really means is that you should have no expectation of privacy for any data you place on Dropbox’s network, unless you go out of your way to encrypt it prior to ever placing the data into the Dropbox folder. Encrypting your data prior to dropping into a Dropbox folder will truly render the data unique, forcing a full upload of the entire file as well as depriving Dropbox of any benefit of deduplication.
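As a concrete example of encrypting before a file ever touches the Dropbox folder, GnuPG’s symmetric mode works well (the filename here is just an example; eCryptFS, discussed above, achieves the same end at the directory level):

```shell
# Encrypt report.pdf with AES-256 under a passphrase; only the resulting
# report.pdf.gpg goes into the Dropbox folder. The ciphertext is unique to
# your passphrase and salt, so cross-user deduplication gains nothing.
gpg --symmetric --cipher-algo AES256 report.pdf

# To recover the original later:
gpg --output report.pdf --decrypt report.pdf.gpg
```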

What this means for Dropbox is increased costs for servers and bandwidth to backup encrypted data since it cannot be deduplicated. While I understand Dropbox’s need to maximize profits and keep costs down, it shouldn’t be at the users’ expense.

Ultimately, no faith can be put in a public cloud to protect one’s own data. They’re great solutions for offsite storage as well as convenience, so long as proper precautions are taken. Encryption ahead of time is the best way to enjoy the fruits of this great technology.

Happy Birthday Linux!

Linux is 20 years old this year … first unveiled in September 1991 as version 0.01. Three years later, in March of 1994, Linux 1.0.0 was released with 176,250 lines of code. The Linux kernel has since grown to more than 14 million lines of code, and now runs many of the world’s most important servers.

Of course, the kernel isn’t all of what we know as Linux. The GNU tools that surround it make up GNU/Linux and indeed the many distributions we know and love today, such as Red Hat, Debian, Slackware and Ubuntu.

Watch this video for a brief overview of the history of Linux.

Linux and open source software provide a valuable service for the community and businesses around the world. If you’ve ever enjoyed reading my blog or have ever enjoyed free & open source software, please consider making a donation to the Linux Foundation or to the Free Software Foundation.

30 Great Tutorials for using GIMP

The open source alternative to Photoshop, GIMP has some awesome abilities.

This site has 30 awesome tutorials with YouTube video links on all topics from photo manipulation to typography.

Been away a while …

…work has been busy … but I am going to try to devote more time to posts … sorry for the hiatus!

In the meantime, take this opportunity to check your backups and make sure your treasured data is duplicated, because everything has a failure date.

…more posts to come.

Helpful SSH commands: Part 1

I use many of these commands quite often. They’re immensely helpful when one wants to do a lot of remote work on a computer, or simply access resources on a remote machine (Linux or otherwise). (FYI: OpenSSH may be installed on Windows machines for anyone who does not have a home Linux box to receive SSH sessions, and PuTTY may be used to SSH from a Windows machine.)

1. Using a Hauppauge HVR-1950 on one of my home machines, I often watch TV on my computer. If I ever want to watch remotely, I set VLC to stream the feed from the capture device (addressing it as a PVR on /dev/video0) using an OGG codec to the local IP address on a specific port number, then SSH to the same box from the outside with the following command:

ssh MyPublicIPAddress -p 12345 -L 6500:ServerLANIP:2503 -o TCPKeepAlive=yes -o ServerAliveInterval=30

This command will SSH to my home public IP on my alternate SSH port and listen locally (client side) on port 6500, forwarding the traffic (encrypted via the SSH tunnel) to my local server’s LAN address on port 2503 (the port I configured VLC to stream on from the server with the Hauppauge device). When I launch VLC on my client and open a network connection to localhost on port 6500 (using VLC menu option ctrl-N) — poof — TV appears on my remote PC.

2. Local port redirects: Using this example:

ssh MyPublicIPAddress -p 12345 -L 7000:VNCServerLANIP:5900 -D 15000 -L 6000:RDPServerLANIP:3389 -o TCPKeepAlive=yes -o ServerAliveInterval=30

This is really an extension of concepts explained in item #1. With SSH you can forward any local port to any remote port on the other side, and funnel encrypted traffic to any computer running any OS on the SSH server side. So to VNC to a home machine from a remote location, simply SSH to your home machine (may require port forwarding and/or port knocking) and divert local port traffic to a remote server of your choice.

Note that the -D 15000 option creates a SOCKS proxy, which routes the traffic of any SOCKS-capable application out through your SSH connection. For example, you can point Firefox at the SOCKS proxy and then check your public IP address (by going to whatismyip.com): while your real public IP may be one address, all your browser traffic is routed through your home connection.

There’s a lot to say on this subject (for example DNS translations are not routed by default through the tunnel) and other nuances. Google “SOCKS PROXY SSH DNS” for more info. This link may offer some further assistance.

There are other complications, in that it’s not easy to route operating system DNS requests (outside of the Firefox browser) through SSH, primarily because DNS runs on UDP port 53. I do not believe SSH will natively handle UDP port rerouting, though I’ve seen some creative solutions with netcat and mkfifo.
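For command-line tools, curl can use the same tunnel directly, and it illustrates the DNS point nicely: its --socks5-hostname mode hands the hostname to the proxy for resolution, so the lookup travels through the tunnel too (this assumes the -D 15000 example above):

```shell
# Fetch your apparent public IP through the SSH SOCKS proxy on localhost:15000.
# With --socks5-hostname the proxy end resolves the name, so no local UDP/53
# DNS query leaks; plain --socks5 would resolve the hostname locally.
curl --socks5-hostname localhost:15000 https://ifconfig.me
```

With the tunnel up, the address printed should be your home connection’s public IP rather than your client’s.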

Also, I have read (in the man pages) that Chrome supports SOCKS via the --proxy-server=<host>:<port> switch. For example: google-chrome --proxy-server="socks://localhost:15000" (with the quotation marks), where localhost is the client-side end of the tunnel and 15000 assumes you used the -D 15000 dynamic port forwarding option from the example above. Check the google-chrome man page for more details.

In the same example used above (copied below for convenience), once I connect to my home SSH box via MyPublicIPAddress, I simply have to open a VNC viewing session to my own client (localhost) on port 7000, and it’ll route to the IP address of my choice inside my home network; in this case to VNC, which defaults to answering on port 5900. Multiple -L’s may be added to route many protocols (RDP, VNC, VLC, NFS, web (80), even e-mail ports) to various machines on the local network.

In the example below I’ve added a second -L option routing traffic from my local client on local port 6000 to another machine in my home network on port 3389 (the Windows RDP port). In that scenario, running (in Windows) mstsc /v:localhost:6000 would allow me to RDP to my home machine. In Linux, I would run rdesktop localhost:6000.

ssh MyPublicIPAddress -p 12345 -L 7000:VNCServerLANIP:5900 -D 15000 -L 6000:RDPServerLANIP:3389 -o TCPKeepAlive=yes -o ServerAliveInterval=30

2a. An extension of the port redirect function of SSH in #2, I’ve written a post on dynamically adding port redirects without having to kill an SSH session to add the new redirects, instead add them on the fly: Click here for the post.

3. SSHFS. Not much to say about it here, simply check my full writeup on the subject.

There are many others that you can find on commandlinefu.com, including one using port knocking.

Happy Birthday ARPAnet! 40 Years!

40 years ago today, at about 9pm on October 29, 1969, two programmers sat 400 miles apart and sent information between their two computers. The first word, “LOGIN”, was sent at that time. Well, actually only “LO” was sent before the Stanford Research Institute computer crashed. They worked on the problem, and about 90 minutes later, at around 10:30pm, the full word LOGIN was sent to the other computer: and the precursor to what we now know as the Internet was born.

SRI, then known as the Stanford Research Institute, hosted one of the original four network nodes, along with the University of California, Los Angeles (UCLA), the University of California, Santa Barbara (UCSB), and the University of Utah. The very first transmission on the ARPANET, on October 29, 1969, was from UCLA to SRI.

ARPAnet evolved into what soon became the Internet that we all know, love and depend on for information and freedom of expression.

Enjoy some links on the subject.

Computer History Museum

The History of ARPAnet

The first schematic of the original ARPAnet

An article on the 40th anniversary including a map which overlays the schematic from the link above.

Wikipedia article on the subject.

NCurses-based Weather Application: Weather-util

When you want the current weather conditions without having to visit a graphically busy weather website, or without the benefit of a GUI (say, when working in a shell), a great app will give you the conditions in no time, just by typing weather at the command prompt.

Simply sudo apt-get install weather-util, and set up the .weatherrc file, and you’ll have instant local weather, plus you can set up presets for weather at [work], [home] or [elsewhere], so you can get the weather for any city.
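For reference, a minimal ~/.weatherrc might look like this (the alias name and station ID here are just examples; see man weatherrc for the full option list):

```ini
# ~/.weatherrc -- each [section] is an alias you can pass on the command
# line, e.g. "weather home". KRDU is the METAR station at Raleigh-Durham.
[home]
id = KRDU
forecast = True
```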

Google “weather-util” for more links on the subject. The application’s home page is here.

Here’s some sample output:

$ weather
Current conditions at Raleigh-Durham International Airport (KRDU)
Last updated Jun 04, 2008 - 01:51 AM EDT / 2008.06.04 0551 UTC
   Wind: from the S (180 degrees) at 10 MPH (9 KT)
   Sky conditions: mostly cloudy
   Temperature: 72.0 F (22.2 C)
   Relative Humidity: 73%
City Forecast for Raleigh Durham, NC
Issued Wednesday morning - Jun 4, 2008
   Wednesday... Partly cloudy, high 67, 20% chance of precipitation.
   Wednesday night... Low 96, 20% chance of precipitation.
   Thursday... Partly cloudy, high 71, 10% chance of precipitation.
   Thursday night... Low 97.
   Friday... High 72.

Ncurses-based Instant Messenger Client: CenterIM

For those who prefer detachable screen sessions with multiple windows in a shell, and want to run instant-message chat sessions at the CLI without the hassle of X Windows … CenterIM is for you.

CenterIM is a pretty robust instant messaging client that runs entirely at your command prompt. Simply sudo apt-get install centerim and you’re ready to go. It takes a little getting used to, but all the files you need are held in your home directory under ~/.centerim . Every contact gets its own folder under .centerim, with contact-specific chat history logs. The master config files are held in .centerim as well. The first time you run the application, it will show an options window allowing you to configure your preferences. If you delete the config file, the preferences dialog will rerun the next time you start the application; you can also access and modify the options at any time by hitting ‘g’ from the main chat window.

CenterIM supports ICQ, Yahoo!, AIM, MSN, IRC, Jabber, LiveJournal, and the Gadu-Gadu IM protocol as well. Anyone familiar with pico, nano or irssi will be right at home with CenterIM.

How to securely delete (UN)USED drive space & other system areas

With modern filesystems securely deleting files isn’t always easy, but one approach which stands a good chance of working is to write random patterns over all unused areas of a disk – thus erasing the contents of files you’ve previously deleted.

We all know that when you simply delete a file, it’s possible to recover it later. Sometimes this is useful, if you accidentally delete something important, but usually this is a problem, and you really want that file gone forever. I will explain here how to delete a file in Linux securely and permanently, so it can never be recovered. In addition, I will show how to completely wipe previously-used (available) space, which will often hold complete files or file remnants that could otherwise be recovered. This applies to hard drives, external USB drives, thumb drives, etc.

To wipe your available (free) disk space, you’ll want to install the secure-delete application. Not only will this application suite offer applications that will wipe files and free space, but it will also wipe your SWAP partition and your system memory (RAM). Wiping RAM is important for privacy as well, since many files are stored in RAM and can be retrieved even after the computer is shut down, right off the chip!

First, install the secure-delete suite of applications:

sudo apt-get install secure-delete

Then, to wipe your /home partition’s free space, for example:

sudo sfill /home

The sfill program will fill up all free space on the designated mount point by creating a huge single file. The contents of this file are written in a number of special passes – ensuring that all areas of the disk which were previously free have had their contents erased. Once completed, the large file is removed, restoring your free space. You can sfill any mount point. Type man sfill for more info and options.
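The underlying idea can be sketched, at a much smaller scale and with only a single pass, using plain dd (sfill itself keeps writing until the disk is full and uses multiple overwrite passes):

```shell
# Write 10 MB of random data to a filler file on the target filesystem,
# then delete it. Scaled up until the disk is full, this is essentially
# what sfill automates (with several overwrite passes instead of one).
dd if=/dev/urandom of=filler.bin bs=1M count=10
rm filler.bin
```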

The command to erase existing files is “srm”, short for “secure rm”. Simply type

srm filename

Where filename is the name of the file you want to securely wipe/delete. You can also use wildcards (e.g. srm filenam*)
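If the secure-delete package isn’t available, GNU coreutils ships shred, which does much the same job for individual files (fewer passes by default, and the usual caveats about journaling filesystems apply):

```shell
# Overwrite the file's contents with random data 3 times, then truncate
# and delete it (-u). Add -z for a final zero pass that hides the shredding.
shred -u -n 3 filename
```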

To wipe your system’s memory (RAM), use this command:

sdmem

sdmem is short for “secure delete memory”. You can run the command by itself, or with options. Type man sdmem for more info.

Similarly, sswap will securely wipe your swap partition. You must disable (unmount) your swap partition before using this command, otherwise your system will likely crash. Once the wipe is completed, you can re-enable your swap partition. Type man sswap for more info. To wipe your swap space, simply type:

sswap /dev/sda8

/dev/sda8 is an example. To find your specific swap device, simply type sudo fdisk -l, or cat /proc/swaps which will list your partitions and their device labels. Also to unmount your swap space, simply type sudo swapoff /dev/sda8 and to remount it type, sudo swapon /dev/sda8.

How to read EXT2, EXT3 and EXT4 partitions in Windows

Ext2Read is an Explorer-like utility for browsing ext2/ext3/ext4 partitions. It also supports Linux LVM2. It can be used to view and copy files and folders, recursively copy entire folders, and view and copy disk and filesystem images. It also supports external USB drives, and works on all recent versions of Windows.

Download it here.

Limit The CPU Usage of Any Process in Linux

CPULimit is an application for Linux that can limit the CPU usage of a process. It is useful if you want to restrict a particular application from taking up so much CPU time that it bogs down or destabilizes the system. This can also be useful when you need to run several intensive programs simultaneously.

This application runs on any distribution, but I’ll discuss its installation on Ubuntu:

sudo apt-get install cpulimit

Once installed, type this to restrict any already-running application’s CPU utilization:

sudo cpulimit -p PID -l CPU%

Where PID = the process ID and CPU% is the maximum percentage of the CPU allowed for use. For example:

sudo cpulimit -p 8992 -l 35

This will restrict process ID 8992 to no more than 35% of the CPU’s availability.

(To see a list of your running processes you can just run the command top, which will list your processes in order of CPU utilization.)
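If you’d rather grab the PID non-interactively, ps can produce the same CPU-sorted view:

```shell
# Show the ten busiest processes: PID, %CPU and command name,
# sorted by CPU usage in descending order (GNU ps / procps syntax).
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 11
```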

Windows 64bit Explained

I found this thoroughly hilarious: Reason #43 why I use Linux. From Cup(Of T).

Look, it’s really not that hard.

Programs are still in the same place, in %ProgramFiles%, unless you need the 32 bit version, which is in %ProgramFiles(x86)%, except on a 32 bit machine, where it’s still %ProgramFiles%.

All those dll’s are still in %SystemRoot%\System32, just now they’re 64 bit. The 32 bit ones, they’re in %SystemRoot%\SysWOW64. You’re with me so far, right? Oh, and the 16 bit ones are still in %SystemRoot%\System – moving them would just be weird.

Registry settings are in HKLM\Software, unless you mean the settings for the 32 bit programs, in which case they’re in HKLM\Software\Wow6432Node.

So the rule is easy: stick to the 64 bit versions of apps, and you’ll be fine. Apps without a 64 bit version are pretty obscure anyway, Office and Visual Studio for example[1]. Oh, and stick to the 32 bit version of Internet Explorer (which is the default) if you want any of your add-ins to work. The ‘default’ shortcut for everything else is the 64 bit version. Having two shortcuts to everything can be a bit confusing, so sometimes (cmd.exe) there’s only the one (64 bit) and you’ll have to find the other yourself (back in SysWOW64, of course). And don’t forget to ‘Set-ExecutionPolicy RemoteSigned’ in both your 64 bit and 32 bit PowerShell environments.

Always install 64 bit versions of drivers and stuff, unless there isn’t one (MSDORA, JET), or you need both the 32 bit and 64 bit versions (eg to use SMO / SqlCmd from a 32 bit process like MSBuild). Just don’t do this if the 64 bit installer already installs the 32 bit version for you (like Sql Native Client).

Anything with a ‘32’ is for 64 bit. Anything with a ‘64’ is for 32 bit. Except %ProgramW6432% which is the 64 bit ProgramFiles folder in all cases (well, except on a 32 bit machine). Oh and the .net framework didn’t actually move either, but now it has a Framework64 sibling.

I really don’t understand how people get so worked up over it all.

[1] Ok, so there is a 64 bit version of Office 2010, but given the installer pretty much tells you not to install it, it doesn’t count.

Via Cup(Of T).


Remember . . .

That the present moment is more important than the next . . .


HowTo: Mount remote filesystems with SSHFS

A few years ago, mounting remote filesystems between Linux boxes was a mystery to me. I wrote this little HowTo to help others who are trying to mount folders between two systems over a network, when you want a secure, encrypted tunnel for all transfers between the two systems. In another page I will discuss how to mount WINDOWS folders from Linux, mapping them to local folders on the Linux box.

The cool thing about SSHFS is that on the server side (the side providing the mounted resource), any mapped WINDOWS shares are available over SSHFS (presumably mounted with a mount -t cifs command), as are any Linux folders and any other mounted resources on the Linux server. So using ONE Linux box you could mount multiple WINDOWS shares, mount various NFS shares to different sub-directories on the Linux server, and unify all the mounts under one parent directory on the primary Linux box. A simple mount of that parent directory would then offer a unified directory structure of shares, all available through one file server: encrypted & secure, all with SSHFS (yay!).

Rsync and SCP are out there, I know, but they are limited in scope. Sometimes you may want to access files on a remote server for editing purposes, to work with them as though they were a local resource, securely; not just copy them (as with an FTP connection, Rsync, or SCP). It is sometimes desirable to have a more persistent, locally available resource that offers the benefits of an encrypted, private connection while maintaining the appearance of a local resource shared between the two computers: this is where SSHFS comes into play.

These instructions are Debian-specific (though not much different under Fedora).

WARNING: Make backups of any modified files PRIOR to modifying them, use this HOWTO document at your own RISK!

SSHFS allows one to mount directories over the SSH2 (SECSH) protocol, as implemented by OpenSSH: essentially mapping a drive over SSH. It uses FUSE, a user-space filesystem framework that essentially lets any program (in this case SSH) create a virtual filesystem. Debian (Sarge, Ubuntu, etc.) users should be able to install the SSH filesystem by typing the following command.

sudo apt-get install sshfs

NOTE: In Ubuntu 10.04, I have found that the above command is sometimes insufficient (especially on 64-bit systems). If you get any errors while trying the commands on this page (such as fusermount errors), then instead of the above command, type this:

sudo aptitude install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev comerr-dev libfuse2 libidn11-dev libkadm55 libkrb5-dev libldap2-dev libselinux1-dev libsepol1-dev pkg-config fuse-utils sshfs

Installing SSHFS also creates a user group known as FUSE. You will need to add your non-root user to this group by typing

sudo usermod -a -G fuse username

Once this is done, your non-root user will be able to access the resources made available by FUSE. Graphically in Ubuntu, you could go to System->Administration->Users and Groups, select the group “fuse” and then add yourself to this group.

Once you’ve added your regular non-root user to the FUSE group, mapping the remote resource over SSH is pretty elementary. However, you will only be able to execute the sshfs command as ROOT. Any mounting of any resource by ROOT will exclude it from your regular non-root user, so to allow your non-root user access to the remote resource you will need to first tell the FUSE module to allow for the option of non-root users accessing FUSE resources. To do so you need to add an option to the /etc/fuse.conf file. Simply . . .

sudo gedit /etc/fuse.conf and add the line below (or kedit if you use KDE. On my Ubuntu system, my fuse.conf was empty, so I added this one line and that’s all the file contains) . . .

user_allow_other


This option will allow you to use -o allow_other in your SSHFS command which will allow your non-root user access to the specific resource you’re mounting. The above line (user_allow_other) must appear on a line by itself. There is no value, just the presence of the option. Once this is saved, you can then use the -o allow_other option in your SSHFS command and it will execute properly.

There is one more system file to edit. In order for Linux to see the virtual filesystems created by FUSE, the kernel needs to be aware of the existence of the FUSE module in your MODPROBE file. Again, run your favorite text editor,

sudo gedit /etc/modules and add the line below (or kedit if you use KDE) at the end of the file add . . .

fuse


On a line by itself, this simply tells Linux to load the FUSE module on boot. This will take effect on your next reboot. However, you probably don’t want to reboot right now to load the module in your current session, so we can force linux to load the FUSE module on the fly by executing the following command as ROOT or using SUDO.

sudo modprobe fuse

This will work for the duration of your session. Once you reboot, your newly-edited /etc/modules file will properly load the FUSE module and the “sudo modprobe fuse” command won’t be necessary in the future.

At this point you have all of the tools necessary to mount a remote filesystem and map it to a local folder on your local machine. The SSHFS command requires a few key pieces of information.

  1. Username (do you have access to the remote machine? If so, what username? It does NOT need to be the root user)
  2. IP or hostname (what machine are you connecting to? Works via name, domain or IP)
  3. Remote folder (what remote folder path do you want?)
  4. Local folder to use (where do you want to virtually represent the remote contents?)

An example of a command to mount a remote system, including the above key elements, would look similar to this:

sshfs user@hostname:/path/to/remote/folder /path/to/local/folder -o allow_other -p 8000

A sample with the above variables filled in might look like this . . .

sshfs john@WorkServerIP:/home/john/work/reports /home/john/reports -o allow_other -p 8000

In the above example, John is using the “john” user to connect to his Linux box at work and mapping the /home/john/work/reports directory on his work server to /home/john/reports on his home machine.

The -o allow_other option allows his non-root user (john on his home machine) to actually access the files on his home PC (without it, he would only be able to interact with the files as the ROOT user).

The -p 8000 option is not necessary. If your work box uses the default SSH port (22) then this option is not needed. However, many servers use different ports to avoid common portscans and hack attempts by random bots, so I made up the port 8000 as an example non-standard port number.

If/when you’re done with the mounted-folder, you can simply unmount it with the umount command.

umount /home/john/reports

(unmounting the local side disconnects the localized virtual resource.)

The main SSHFS command does not need to be typed over and over: you can easily create a short text file and make it executable, so that simply typing the file’s name executes the long command.

Again use your favorite text editor (no need to do this as root).

gedit workmount

(or any filename you like) and type your long SSHFS command, and save the file.

Once saved, from command line type the following . . . (make sure you’re in the same path as the filename otherwise you will have to specify the full path to the filename)

chmod 700 filename

This will make the text file executable (like a batch file in WINDOWS). From command line you can simply type in the future . . .

./workmount


. . . and you’d be prompted for the password to the remote computer. Once entered, your share will be mounted, enjoy!
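Putting those last steps together (WorkServerIP and the paths are placeholders from the earlier example):

```shell
# Create a "workmount" helper holding the long sshfs command, then make it
# executable. Afterwards, typing ./workmount performs the whole mount.
cat > workmount <<'EOF'
#!/bin/sh
sshfs john@WorkServerIP:/home/john/work/reports /home/john/reports -o allow_other -p 8000
EOF
chmod 700 workmount
```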

General Items of Interest


Linux User Groups:

How to undelete files in Linux:

Hyperterminal alternative in Linux/Ubuntu:

    • Minicom: sudo apt-get install minicom

File Structure Intro:

Linux Manuals:


Linux LVM (Logical Volume Management):


5 Ways to screen cast your Linux Desktop: 5 ways. Popular app – Istanbul.


Desktop Recording:


Wikis of note (besides MediaWiki):

  • DokuWiki: Flat file wiki, no DB needed, just PHP and Apache. No WYSIWYG editor. Linux.com did an article on the project.

Media Player worth watching: Elisa.

Audio editor and multitrack recorder: Traverso DAW.

35 of the top Linux distros & how they got their name.

Linux SCREEN command (howto’s for reference):



Recovery of deleted files & directories (ReiserFS):

Peer to Peer Related (Linux)

Newbish items (Linux Related)


  • Linus Torvalds Interviews: 1, 2.
  • Chmod Notes:


Blogging Tools:




GPS Related:

More Misc:

The first digit is the user (owner), then group, then everybody else. The digits are defined below:

7 – Read, Write, Execute
6 – Read, Write
5 – Read, Execute
4 – Read
3 – Write, Execute
2 – Write
1 – Execute
0 – No permissions
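For example, a common mode for a file you own and want others to read is 754: owner gets read+write+execute (7), group gets read+execute (5), everyone else read only (4). A quick sketch you can try:

```shell
# Create a file, set mode 754, and confirm with stat:
touch report.txt
chmod 754 report.txt
stat -c %a report.txt    # prints 754
```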

How to speed up SSH logins in Ubuntu clients:

Edit the /etc/ssh/ssh_config file using the following command

sudo vi /etc/ssh/ssh_config

Comment out the following lines (add a # at the beginning of the line)

GSSAPIAuthentication yes
GSSAPIDelegateCredentials no

Note: In Ubuntu GUTSY (7.10), you will want to uncomment all the lines starting with GSSAPI lines (remove the # sign) from the /etc/ssh/ssh_config file, then your SSH login will be instantaneous.

Save the file and exit.

How to mount .ISO files and interact with the contents over a virtual directory:

This is easy in Linux. First, make a directory to serve as the mount point (you may create the directory as a non-root user).

$ mkdir /home/username/iso

Then, simply mount the ISO making reference to the directory you just made …

$ sudo mount NameOfTheISO.iso /home/username/iso -o loop

The loop device is a device driver that allows an image file to be mounted as though it were a normal block device, such as a CDROM.

When you’re done, simply unmount the ISO …

$ sudo umount /home/username/iso

RSYNC Sample command:

rsync -ra --progress --size-only /home/joe/mounts/99_L /media/disk/source


Optimum Online (boost): terms of service. See section 21(b).

General HowTo’s for Reference

Screen & IRSSI, a nice intro & howto:

An excellent write-up on CRON and ANACRON.

Filters and Pipes:

Asterisk HowTo (from scratch):

IRC Channel & User modes document:

How to create .ISO files from DVD/CD’s and access them off your hard drive virtually, without the need to put the disc in again:

Put the CD in, then …

$ sudo umount /dev/cdrom && dd if=/dev/cdrom of=filename.iso bs=1024

You can also do the same with folders:

$ mkisofs -r -o file.iso /location_of_folder/

You can keep all your .ISO files in one directory (called “iso” for example). Assuming you have such a directory where your .ISO files are stored:

To mount the ISO (using a separate directory, e.g. /home/username/mnt, as the mount point):

$ sudo mount /home/username/iso/NameOfTheISO.iso /home/username/mnt -o loop

$ sudo mount -o loop -t iso9660 foo.iso /mountpoint (if it was an .ISO made from a CD)

The loop device is a device driver that allows an image file to be mounted as though it were a normal block device, such as a CDROM.

When you’re done, simply unmount the ISO (by its mount point, not the file) …

$ sudo umount /home/username/mnt


If you get a Valicert not Trusted error when trying to use Citrix Metaframe web portal:

Download this (right-click, save link as) ValiCert certificate file.

Place the file in /home/username/ICAClient/linuxx86/keystore/cacerts

(cert file downloaded from: here.)

Howto Re-install Grub after windows wipes it out (if installing Windows after Ubuntu in a dual-boot capacity):

If you have a good install of Ubuntu and later decide to install Windows, as a dual-boot, you wouldn’t want to wipe your fine Ubuntu install. If you install Windows *after* Ubuntu, Windows will wipe the Ubuntu boot loader in favor of its own, locking you out of the option to boot into Ubuntu. To re-install the Grub boot loader, which will give you the option to boot into Windows or Ubuntu, do the following.

1) Boot off the Ubuntu LiveCD.

2) Open a Terminal (Applications-Accessories-Terminal) and type in the following commands, noting that the first command will put you into the grub “prompt”, and the next 3 commands will be executed from there. Also note that hd0,0 implies the first hard drive (hd0) and the first partition (the 0 after the comma) on that drive, which is where you probably installed grub to during installation. If not, then adjust accordingly.

sudo grub
> root (hd0,0)
> setup (hd0)
> exit

3) Reboot (removing the LiveCD), and your boot menu should be back.

4) Open the grub menu file:

sudo gedit /boot/grub/menu.lst

5) Scroll to the bottom and add the following:

title Windows XP
root (hd0,0)
chainloader +1

Note that you should also verify that hd0,0 is the correct location for Windows. If you had installed Windows on the 4th partition of the drive, you should change it to (hd0,3): since partition counting begins at 0, the 4th partition on the first hard drive is 0,3, not 0,4 (that would be the fifth).

How to add your own custom TrueType Fonts to your Ubuntu (or generic) Linux system (the manual way):

There is a great set of fonts that Red Hat released to the public called Liberation Fonts. They’re .ttf fonts, so you’ll need to follow the instructions below to install them.

Also, to get the freely distributable, commonly used Microsoft fonts, you don’t need to install them manually. Simply typing this will retrieve and auto-install them:

sudo apt-get install msttcorefonts

Now, as to the instructions to install any TrueType font (including the Liberation fonts from Redhat), follow these instructions:

1. You need to create a location where your custom fonts will reside. Drop to command line and type:

cd /usr/share/fonts/truetype

In there you want to make a ‘custom’ directory for yourself, so if it doesn’t already exist type this:

sudo mkdir custom

2. Assuming you’ve been downloading your fonts into a fonts directory in your home directory and storing them there, you’ll need to copy them to this new custom folder so that Linux can see them as available to the system.

sudo cp /home/yourname/fonts/*.ttf /usr/share/fonts/truetype/custom

3. You must ensure that root owns these fonts, otherwise they will not be available to the system, so from within your custom folder, type:

sudo chown root:root *.ttf

4. Now you simply have to reload the font cache so it’s available to your applications:

sudo fc-cache -f -v

Now, your fonts will be made available to your system and applications!

How to create a dynamic proxy via SSH:

SSH can serve as the proxy, allowing you to connect to shell.example.org and make connections from there to an arbitrary server such as mail.example.net. Simply run:

ssh -D 1080 shell.example.org

to make the connection to shell.example.org and start a SOCKS proxy on localhost port 1080.

Standard SSH local/remote port forwarding:

With standard SSH port forwarding, you could enter the command:

ssh -L 2525:mail.example.net:25 shell.example.org

This will forward port 2525 on your machine to port 25 on mail.example.net, by way of shell.example.org. You will then need to configure your mailer to send mail to localhost, port 2525, and use the authentication information for your mail account on mail.example.net.
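Both forwarding styles can also be made persistent in your ~/.ssh/config, so that a plain `ssh work` sets them up automatically (the Host alias “work” is made up here; the hostnames and ports are the examples from above):

```
Host work
    HostName shell.example.org
    DynamicForward 1080
    LocalForward 2525 mail.example.net:25
```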

How to maintain an SSH connection (keep it alive, from the client side):

Add this line to your local ssh_config in /etc/ssh (on the client side!)

ServerAliveInterval 180

How to mount a new hard disk inserted into a Linux system:

Open a terminal window and enter the following commands —

(1) Create the Mount Point

sudo mkdir /nas2

(2) Back up the /etc/fstab file.

sudo cp /etc/fstab /etc/fstab-backup

(3) Edit the /etc/fstab file and add the new partition to /etc/fstab. Since the file is owned by “root”, we need to use sudo to start an editor.

sudo nano /etc/fstab

Add this line to /etc/fstab. Use tab instead of space to separate the various columns.

/dev/sda1 /nas2 ext3 defaults 0 0

To save your changes, press Ctrl-X, then Y to confirm, and press Enter to exit.

(4) We have made changes to the /etc/fstab file, so let’s ask Ubuntu to mount the drives again:

sudo mount -a

(5) Now give ourselves proper permissions to use the new drive. Assume in this example that my userid is “freddie”.

sudo chown -R freddie:freddie /nas2
sudo chmod -R 755 /nas2

Now the new drive is mounted as /nas2 and is ready to use.

One more thing: as a convenience, you can also create a symbolic link (using the ln -s command) on your desktop back to the /nas2 folder. Just click on the new link to open the folder in the default file browser.

How to set up SAMBA on Linux, so Windows can mount shares sitting on Linux boxes:

To install: sudo apt-get install samba

Once the server is installed, issue the following command:

sudo gedit /etc/samba/smb.conf

Make the following changes:

workgroup = WORKGROUP

underneath it, add

netbios name = name_of_your_server (no spaces)

For example:

netbios name = kenny_smb_server

Make sure “security” is set to “user” (this will only allow users created on the linux box as valid usernames to mount the Linux shares as opposed to windows-users)

Scroll down until you see “[homes]” and set the following (remove the leading ; to uncomment the lines, changing the no’s to yes’ and vice-versa as needed, or just leave the commented lines alone and type your own):

browseable = yes
writable = yes

Then save the changes.

Finally, create a SMB user, make sure this account exists on your Ubuntu Linux. Take an existing user (or add a new one) that is valid on your linux box and for that username type …

sudo smbpasswd -a username

you will be asked for the samba-specific password for this linux user (this will be the password you will use to mount the shares, instead of the user’s real password on the linux box).

OKAY, you are finished configuring Samba on your Ubuntu Linux.


Now for the Windows side…

There are two ways to access it:

Method 1:

My network places > Entire Network > My Windows Network > Workgroup
You should see a folder called “homes”; click on it, and it will ask you for your username and password. Enter your Ubuntu login name and whatever you chose for the password when you ran the “smbpasswd” command. You should be able to take it from here.

Method 2:

In Start\Run type “\\[whatever you named the Samba server]”. From my example above, I used “\\kenny_smb_server\”. You can also hit it by IP: \\<ip-address>\<linux-username> (which will mount that user’s home).

Keep in mind that you are sharing, /home/[linux login name]/*
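Before switching over to the Windows machine, you can sanity-check the share from the Linux side with smbclient (part of the Samba suite; the server and user names below are the hypothetical examples from above):

```shell
# List the shares the Samba server is exposing; you'll be prompted
# for the password you set with smbpasswd.
smbclient -L //kenny_smb_server -U username

# Or connect straight to your own home share and browse it interactively:
smbclient //kenny_smb_server/username -U username
```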

How to mount windows shares from Linux, permanently (requires entries in /etc/fstab):
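The linked instructions didn’t survive here, but the core of the approach is a single /etc/fstab line using the cifs filesystem. A minimal sketch (the server, share, mount point, and credentials-file names are all hypothetical):

```
//winserver/share  /mnt/winshare  cifs  credentials=/home/username/.smbcredentials,uid=username  0  0
```

The credentials file holds two lines (username=… and password=…) and should be chmod 600 so other users can’t read it; after adding the line, sudo mount -a mounts the share.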

How to reset Gnome settings to defaults if messed up or if Gnome doesn’t load-up or look right:

If you don’t have access to your graphical (GUI) desktop to delete these folders in Nautilus or you’re stuck at the login screen, drop to a terminal by hitting CTRL + ALT + F1, login to your account, and run this command:


rm -rf ~/.gnome ~/.gnome2 ~/.gconf ~/.gconfd ~/.metacity

Get back to your GUI desktop by hitting CTRL + ALT + F7.

This will not fix any video issues or Xorg.conf issues, just Gnome-specific issues.

How to format a drive with XFS: (Excellent link with some instructions).

1. Dismount the drive

2. sudo apt-get install xfsprogs

3. sudo mkfs.xfs /dev/sdc1 -f (where -f is FORCE write), wait a few seconds as it writes the sectors.

4. To label a HDD with a label name, simply type, “sudo xfs_admin -L media /dev/sdg1”. where “media” is the label name, and “sdg1” is the device.

5. Mount the drive (sudo mount -t xfs /dev/sda1 /mnt/blah) or powercycle the external drive or computer to allow Linux to auto-mount (not applicable if it's an OS boot drive), applicable if it's an external USB drive.

6. sudo chown -R username:username /mount/point (allows you to write to the drive as a non-root user).

7. To set up automounting of the drive in Fstab, you’ll need the USB drive’s UUID tag. Find this by typing:

sudo blkid device

where device is the /dev entry for the partition you want to know about.  for example,

sudo blkid /dev/hdc3

This will return the UUID of that drive (whether mounted or not). You can also just run “sudo blkid” by itself and it will give you the UUID tags for all your devices.

8. Modify your /etc/fstab file (gksudo gedit /etc/fstab) and add the following at the bottom:

UUID=23d3ccfa-8c35-3638-c6f2-6c5b4231d5bd    /media/2TB_2ndary    xfs    defaults    0 0

This assumes the drive was formatted with XFS, of course. Just put tabs between the fields.

9. Step 8 will mount the drives on a reboot, but if you want to mount them manually, just type “sudo mount -a” or “mount -t xfs /dev/sdx1 /media/mountdir” where sdx1 is the actual device of the drive in question and /media/mountdir is the directory you’ve chosen as a mount point.

How to install Adobe Reader on Ubuntu Hardy 8.04

1. Click here.

How to reset a lost password in Linux

1. Click here.

Article on SafeSQUID (content filter-based proxy module)

1. Click here.

How to set up a PPTP (VPN) server compatible with MS boxes on Ubuntu:

1. Click here.

How to ENCRYPT a plaintext file into ascii-armored text:

1. gpg --textmode --armor -c ./filename (where “filename” is the name of the file to encrypt).
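A round-trip sketch of the idea, using gpg’s long option names (notes.txt is a made-up filename; you’ll be prompted for a passphrase both times):

```shell
# Symmetric-encrypt a file to ASCII-armored text (writes notes.txt.asc) ...
echo "my secret notes" > notes.txt
gpg --armor --symmetric notes.txt

# ... and decrypt it back:
gpg --decrypt notes.txt.asc > recovered.txt
```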

Manage Windows Remotely From a Linux Command Line Interface

Many administrators must work in multiple operating systems, such as Windows, Linux, Solaris and Unix, with Windows being one of the most common among them.

As I’ve often stated, I prefer Linux, using Windows only when I must. From an administrative perspective, however, it’s really helpful if, while using Linux for various administrative tasks, you can streamline your work environment and handle common tasks such as starting or stopping Windows services at will from Linux.

To get a list of all available services on a Windows PC or Server, type the following from your Linux command line:

net rpc service list -I IPADDRESS -U USERNAME%PASSWORD

If you have a complicated password that uses symbols (such as ! # @, etc.) you will find that entering the password (even in “quotation marks”) will not work; you will have to leave the %PASSWORD portion off and just enter the USERNAME, and you’ll then be prompted for the password manually. Also note, some services may have spaces in their name. If so, simply “enclose the service name in quotes” to start or stop that service.

If on a domain . . .

net rpc service list -I IPADDRESS -U "domainname\username"

You will then be prompted for the password. Once you’ve authenticated, the list of services will scroll on your screen.

To stop any service:

net rpc service stop SERVICENAME -I IPADDRESS -U USERNAME%PASSWORD

or if on a domain

net rpc service stop SERVICENAME -I IPADDRESS -U "domainname\username"

To start any service:

net rpc service start SERVICENAME -I IPADDRESS -U USERNAME%PASSWORD

or if on a domain

net rpc service start SERVICENAME -I IPADDRESS -U "domainname\username"

You can do more than stop or start services. This functionality stems from SAMBA on Linux. You can add and remove users remotely, change user passwords, kill print jobs, show all users for a specified group, list all groups, shutdown the server or PC, shutdown-and-restart the server or PC and much much more: just type “man net” for more information, however, here are a few gems . . .

To list all the shares on a PC or Server (example):

net rap share -I IPADDRESS -U "mydomain\john"

To list the Print Queue on a PC or Server (example):

net rap printq -I IPADDRESS -U "mydomain\john"

To get the name of the server you’re accessing (example):

net rap server name -I IPADDRESS -U "mydomain\john"

To list ALL the open SMB/CIFS sessions on the target computer (example):

net rap session -I IPADDRESS -U "mydomain\john"

To reboot the server or PC and force all apps to shutdown gracefully:

net rpc shutdown -r -f -I IPADDRESS -U "mydomain\john"

These commands can easily be scripted with or without variables (for the IP addresses) to speed up the process.
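As a sketch of that scripting idea (hosts.txt, the domain, and the username are hypothetical; DRY_RUN=1 just prints the commands so you can check them first):

```shell
#!/bin/sh
# restart-winservice.sh: stop then start one Windows service on every
# host listed (one IP per line) in hosts.txt.
SERVICE="$1"
while read -r ip; do
  if [ "$DRY_RUN" = "1" ]; then
    echo "net rpc service stop $SERVICE -I $ip -U mydomain\\john"
    echo "net rpc service start $SERVICE -I $ip -U mydomain\\john"
  else
    net rpc service stop "$SERVICE" -I "$ip" -U "mydomain\\john"
    net rpc service start "$SERVICE" -I "$ip" -U "mydomain\\john"
  fi
done < hosts.txt
```

Usage: `DRY_RUN=1 sh restart-winservice.sh Spooler` prints the commands that would run against each host; drop DRY_RUN to actually run them.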

BleachBit: keep your system tidy and clean

An extremely easy to use application, BleachBit will scan your Linux system for Thumbs.db files, system and application cache directories, and old log files, and will also wipe empty space if you so choose, to ensure privacy. It is aware of many applications and knows exactly where their cache files are located. I found it not only reclaimed a good chunk of disk space from hundreds of .DS_Store and Thumbs.db files, but also cleared many cache files from programs I had removed months ago. (It also runs on Windows.)

From their home page:

BleachBit quickly frees disk space, removes hidden junk, and easily guards your privacy. Erase cache, delete cookies, clear Internet history, remove unused localizations, shred logs, and delete temporary files. Designed for Linux and Windows systems, it wipes clean 70 applications including Firefox, Internet Explorer, Flash, Google Chrome, Opera, Safari, Adobe Reader, APT, and more.

It is available for most Linux distributions. Here is a great write-up on it from Linux Magazine.

Remixable textbooks by peer-reviewed authors for community use in education.

I came across an interesting service for educators and students: Flatworld Knowledge. Creative Commons licensed textbooks for students, allowing professors to edit and adopt textbooks to their own needs and requirements. Also a lot cheaper than classic textbooks, these are available for reading online, or for low priced printing in hardbound editions, or printable via PDF. The texts also include teacher supplements such as instructor manuals, lecture slides and tests.

Once a professor has chosen to customize a textbook, it gets a unique URL allowing students of the class in question to download or publish on-demand the customized textbook. I found the subject catalog a bit limiting right now, but I would expect that to grow over time. This site is still worth examining if one is an educator or student looking for community driven, affordable teaching/learning materials. Some authors also put out podcasts on their books, accessible from the site. I’m also sure Flatworld Knowledge would enjoy hearing from some who are interested in writing a textbook of their own for peer review and publication by them.

I noticed there aren’t any textbooks about computer science: perhaps someone out there is willing to change that!

Easily find your hardware specifications (and some system monitoring commands) in Linux

When a PC or server is running Linux, you often want to know exactly what sort of hardware is actually running inside the box and more importantly whether it is supported by the kernel. Here is a list of commands which should help you to learn about your system and some of its specifications. In some cases, these commands may not work as listed below if you’re running a Red Hat or Fedora based distribution. In those instances simply specify the path to the command which will be /sbin/command.

If any of the output runs off your screen, just add |more to the end of any of these commands to see the output one screen at a time and hit the spacebar to go to the next screen, or Q to quit.

Processor type:
$ cat /proc/cpuinfo

Is the processor using 32 or 64 bit instruction set:
$ cat /proc/cpuinfo | grep flags | grep lm
If you get some output you have a 64 bit CPU. If you receive no output, then you’re using a 32 or even 16 bit CPU. The reason this is the case is that the CPU yields many flags that tell Linux what sort of processor it is, and the lm flag informs Linux that the CPU is a 64 bit processor. Grep as a command filters output. Feel free to run this command without the grep suffixes (cat /proc/cpuinfo) to see the full output of your CPU details.

What hardware (audio, video, disk controllers, etc) is in my Linux box:
$ lspci -tv
(The -t switch groups similar devices together for easy reading and -v offers more verbosity.)

To easily filter out the above command to just show graphic card information:
$ lspci | grep VGA

What USB devices are plugged in:
$ lsusb

Check the size of the hard drive and what hard drives are available in the system.
This command will also list USB drives and sticks. You need root permissions to execute the fdisk command:
$ sudo fdisk -l | grep GB

Show info about a particular hard disk including firmware revision (replace sda with the appropriate drive as listed from the above command):
Note: This will only work on internal disks, NOT USB drives.
$ sudo hdparm -i /dev/sda

Check what partitions and file system is in use on my hard drives (same as the above command, but essentially more verbose):
$ sudo fdisk -l

Locate the CD/DVD-ROM device file, which offers the CD/DVD-ROM’s make and model info:
$ wodim --devices
$ wodim --scanbus
The above command will scan your entire system bus for attached devices (this won’t include USB devices, as they are not direct-bus-attached devices).

What modules are currently loaded:
$ lsmod

Get information about any particular module:
$ modinfo module_name

Remove a module:
$ modprobe --remove module_name

Load a module into the kernel:
$ modprobe module_name

What hardware is using which module.
The -v switch is for verbosity, where -vvv is EXTRA verbosity.
$ lspci -v
$ lspci -vvv

Check for PCMCIA cards:
$ lspcmcia

How much RAM is installed in my Linux system and how much of it is in use (megabytes).
It will also include swap memory:
$ free -m
There is a gigabyte switch, but it *rounds* it down, so it isn’t very accurate for RAM info:
$ free -g

Check sound card settings. This command will reveal whether your sound card is installed and what modules are in use:
$ cat /dev/sndstat

Available wireless cards:
$ iwconfig

What speed the fans are set to:
$ cat /proc/acpi/ibm/fan
If this command doesn’t work, then feel free to peruse the /proc/acpi directory on your system. You will find info available on your CPU, AC Adapter, Battery, etc. Some info is available here, and your mileage may vary for viewing any of the files in /proc/acpi.

Get battery information on your laptop (assuming powersave is installed):
$ powersave -b

To find out what Linux kernel you’re running:
$ uname -a

To find out what distribution of Linux you’re running:
Run any of these commands, as depending on your distribution some may or may not work.
$ cat /etc/issue
$ cat /proc/version
$ dmesg | head -1

Get a recent history of system reboots:
$ last reboot

To open any file from command line using the default application (will launch the correct graphical application for the file, as though you had doubled-clicked the file graphically):
$ xdg-open ./filename

To monitor all active network connections, and update live every second:
$ watch -n 1 'netstat -tup'

To passively list all connections, active or inactive:
$ netstat -tupl

For more info on system monitoring tools (and there’s a lot) try this as a first stop.

Easily save any Flash Video to your local disk

Web based, easy to save videos from sites like Youtube, Dailymotion, Metacafe, Veoh, Flickr, Google, Blip.tv.


Saves the movie as a .FLV (Flash Video) file.

Beware of your photocopiers!

Many people don’t know that there are hard drives in many photocopy machines today, especially in any office style photocopier made within the last 5 to 7 years. These hard drives often retain scans of old documents. This matters when an office disposes of an old copier, as it’s been a treasure trove for identity thieves and other busybodies. Whether at the office or at a commercial copy storefront like Kinko’s or Staples, copies of your private documents stored on public machines for an indefinite period has some obvious drawbacks. Here’s an article posted 3 years ago on the topic, and one posted about a week ago — not much has changed.

Current photocopiers can produce copies very rapidly because they scan the page only once and store it digitally on an internal hard disk. The copier then uses that image file to print copies with technology similar to that found in laser printers. Indeed, many copiers today can function as a direct printer for your PC (or even e-mail your document directly from the copier), which requires a network connection; this means many units can be addressed remotely and are therefore vulnerable to remote perusal.

For personal and private documents, a personal scanner & printer (at home) might be the wiser choice.

Command Line Magic: Part III

As part of my continuing Command Line Magic series and many of the other Command Line oriented posts I’ve made (click here for category-summary of Command Line oriented posts, or just click the Command Line tag in the tag cloud to the right), I’m happy to post another set of highly useful commands. As always, the context of these commands are within the Bash shell in Linux. A moderate understanding of Bash shell commands is required to fully appreciate this post.

Here are some very useful commands, that any power user would find helpful:

1. Start a simple webserver to serve up any directory as browsable from anywhere (for file transfers):

$ python -m SimpleHTTPServer

I’ve mentioned this in past posts. This is a simple command that, when run from any directory, will launch a simple Python web server serving up the local directory as browsable from a browser such as Firefox or Chrome. Any subdirectories underneath the directory from which the command is run will also be browsable. You can right-click and save any file, or left-click it to attempt to view it on the fly. This works very well over SSH sessions, when you want to transfer a file but don’t want to engage SSHFS or SCP. You can background the process with Ctrl-Z followed by the bg command (then pkill python to stop the web server), or just leave it running at the command prompt and press Ctrl-C to end it.
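One note: on newer distributions that ship Python 3, the module was renamed; the equivalent command (with an optional port argument) is:

```shell
# Python 3 equivalent of SimpleHTTPServer; serves the current
# directory on port 8000 by default:
python3 -m http.server

# or choose a port explicitly:
python3 -m http.server 8123
```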

2. Record your desktop and pipe the output to an mpeg file.

ffmpeg -f x11grab -s wsxga -r 25 -i :0.0 -sameq /home/john/desktop.mpg
  • -f allows ffmpeg to grab the data properly from the x11 framebuffer
  • -s sets the size of the screen to actually record, starting from the upper left of the screen. Here wsxga denotes a specific preset resolution (in wsxga’s case that would be 1600 x 1024). You can however type any resolution you like in manually (e.g. -s 1024×768). You will need to know the resolution of your desktop to set this correctly.
  • -r sets the framerate. This could be left out as 25 is the default.
  • -i sets which framebuffer to take, since XWindows can run in multiple sessions, generally you’ll want to leave this setting alone.
  • -sameq forces the same quality as what is being fed in by the source (in this case the x11 framebuffer). This is helpful for a max-quality video, though you may want to try other settings to degrade the quality and keep the file size down. If you’d prefer to reduce the quality on the fly, replace -sameq with -qscale x where x is 1–31. These are preset quality settings, with 1 being the highest and 31 being very poor video quality. I have found -qscale 10 to be the sweet spot between quality and file size.
  • If you’d like the file to be a bit smaller and if you prefer an .AVI to a raw .MPG, then simply remove the /home/john/desktop.mpg in the command above and replace it with:
    • -vcodec mpeg4 /home/john/desktop.avi
      • This file will be a bit smaller, using the mpeg4 codec in an AVI container. You can still use the -qscale option with this change.

3. Copy an entire directory tree through ssh using on the fly compression through an SSH session (no temporary files!):

$ ssh <host> 'tar -cz /<directory>/<subdirectory>' | tar -xvz

Just enter the <host> to SSH to, and the host’s <directory> and <subdirectory> path to compress that subdirectory on the fly at the host, but decompress it as it arrives locally to your current location and path. This will have the advantage of not taking up any extra space at the host (since the files are compressed as they’re transmitted) and easily drops the entire directory tree specified onto the client uncompressed, saving time and bandwidth and transmission time.

This works well for large directory trees and is easy to use for a quick copy where you don’t want to spend a lot of time compressing it at the host manually and transmitting the compressed file, then uncompressing it, then deleting the original compressed file created at the host. Note: This will replicate the full directory path at the client side (desired).

SCP or RSYNC are recommended for automated backup though, this is more appropriate for a 1-shot copy of a large directory.
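The same pipe works in the other direction, pushing a local tree up to a remote host (the hostname and destination path are placeholders; the explicit -f - just spells out that the archive travels over stdout/stdin):

```shell
# Compress a local tree on the fly and unpack it on the remote side,
# with no temporary archive created on either end:
tar -czf - /home/john/projects | ssh backup.example.org 'tar -xzf - -C /destination/path'
```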

4. Resize any image files in the current directory to the Width x Height specified (regardless of image format)!

$ for a in `ls`; do echo $a && convert $a -resize <Width>x<Height> $a; done

Simply do a man convert to learn more about the convert program, other options can be added into the command. Also this is a great syntax for doing ANYTHING to any files in a particular directory that would be a batch process consistent with all the files in that directory.
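One caveat: the `for a in \`ls\`` pattern breaks on filenames containing spaces. A slightly safer variant of the same batch idea (the 800x600 size and the .jpg/.png extensions are arbitrary examples; requires ImageMagick):

```shell
# Resize every .jpg and .png in the current directory in place;
# quoting "$f" keeps filenames with spaces intact.
for f in *.jpg *.png; do
  [ -e "$f" ] || continue   # skip the literal pattern when nothing matches
  convert "$f" -resize 800x600 "$f"
done
```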

5. Grab a screenshot of the current desktop to the current directory

$ import -pause 5 -window root desktop_screenshot.jpg

This command will wait 5 seconds (assuming you want some time to set up the shot and to get the command prompt out of the way) and take a snapshot of the root (primary) desktop currently running. This command requires imagemagick be installed.

There’s no defense like an obvious (read passive) offense . . .

. . . So Bill Gates & Steve Jobs both threatened to sue the former standard bearer of the Open Source movement, SUN Microsystems (before it was whisked away from us by Oracle). The CEO at the time was Jonathan Schwartz, who by waving the banners of Unix and JAVA in front of both Bill and Steve forced them to stand down.

Being an obvious proponent of Open Source (also known as FOSS), I generally use only Linux and Open Source software. I own an Android phone, my home machines run Linux and wherever possible I try to deploy Open Source software professionally where possible & appropriate. I’ve never owned an Apple/MAC or an iAnything. In fact, my Sansa e280 media player runs Rockbox, the Open Source Jukebox Firmware for media players instead of the closed source software shipped with it. I have owned Windows systems and played with DOS in my youth, but once I reached the age of liberation I made a conscious choice to walk down the less trodden path and have reaped the rewards for it.

We’re all interdependent and this fact is ignored by many. Both Microsoft and Apple deny that the very foundations of their closed source products are rooted in the collaboration of the community, rooted in Free and Open Source Software (FOSS). Indeed, Apple’s OS is based on FreeBSD, while .NET (Microsoft’s primary application framework) is clearly drawing its inspiration from JAVA.

UNIX is one of the seminal operating systems and has, I contend, influenced the world in which we live more than Apple or Microsoft. In some of its core applications under the hood, some Windows code is based on FreeBSD. Simply click here for an example, of which there are many, or this link, or this link. It’s not a majority of the code, though; I wouldn’t want to demean BSD by drawing parallels between the two <smirk>.

Of course, as we all know, Apple’s OS is based on FreeBSD. Mac OS X is built upon the Mach kernel; parts of FreeBSD’s and NetBSD’s implementations of Unix were incorporated into NeXTSTEP, the core of Mac OS X. See this link for more info (Wikipedia).

Internalizing these facts helps me realize that raw creativity, intelligence, community and ingenuity can provide great fulfillment, certainty and happiness in many spheres.

Having said all of the above, reading this article brought a smile to my face. It is a summary of Jonathan Schwartz’s blog post which can be read in its entirety here.
