Cheese Making Crash Course, the first year of lessons

For about a year now I have been making cheese. At first there were many failures, but slowly, over time, I figured out how to get past the mistakes. This is a run-down of a year's worth of trials and tribulations.




Firstly, it's all about the milk. The milk dictates how good the cheese will be on the other side, and how easy it will be to get there. Never buy milk labeled 'ultra' anything (ultra-pasteurized/UHT), as it will be impossible to make cheese or butter with. Even so, it is best to start with store-bought milk, because finding a source of raw milk can be a chore and you don't want to waste 'good' milk.


There are two types of milk that are generally easy to obtain: cow's and goat's. There is an easy way to obtain either, and a hard way. The easy way is to go to the store, where cow's milk is extremely easy to find. Goat's milk is somewhat difficult to find in a store, but not impossible.


Raw milk (or pasteurized but not homogenized) is superior in every way to store-bought milk when it comes to cheese making. It will take some time to find a source for raw milk; generally, you have to find a farmer through word of mouth. Laws also vary from state to state. For example, in California you apparently have to be an owner of the cow in some way.


The price for a gallon of milk in-the-raw is usually about five dollars, or an ounce of silver for 4 gallons, depending on who you are dealing with. I have seen raw milk in specialty stores as high as $12 a gallon.


  1. Store-bought cow's milk is easy to obtain, but the homogenization makes it difficult to get a good quality cheese out of it. Calcium chloride usually helps with this difficulty, though it can be done without if you are very careful.
  2. Goat's milk from the store is also usually homogenized. However, it is slightly easier to get a curd out of homogenized goat's milk than cow's.
  3. Raw cow's milk is sometimes easier to obtain in volume.
  4. Raw goat's milk will usually produce a curd more easily than cow's milk. However, some people do not like the taste, while others (myself included) prefer it; it is very distinct.




We are all on a budget, so the money worth spending is limited by the odds of success. Time is also a factor, and there is the need-versus-want aspect to consider. This list goes from the most essential to the least essential, with an eye on the budget.


1.) A thermometer is essential. I cannot tell you how many failures I went through trying to avoid spending three measly dollars on a thermometer. The thermometer must have good sensitivity around 90-130 deg F (32-54 deg C). If you are just starting out a candy thermometer might work, maybe.
Eventually, I ended up buying one of those digital thermometers that beeps at a specified temperature. However, it is best to put that ten-to-twenty dollar investment in capital equipment off until it is needed. If you leave the pot and the temperature climbs too high, things might not turn out well. Believe me, I tried everything to avoid buying a thermometer, but the temperature is key in making most cheeses.


2.) A muslin cloth is a pretty cheap thing, but is usually only sold at specialty stores. It costs about as much as a couple of rolls of paper towels, but unlike paper towels it can be re-used after a wash in clean water. Do not put a muslin cloth through the laundry, because detergent is not good for cheese. Before buying one I tried everything: a trimmed/cut cotton t-shirt that I had to repeatedly wash in clean water, paper towels, even hand-towels!


If you try to use an old t-shirt, plan on washing it in a massive amount of water or throwing it out; the cheese will often stick to the fabric and not come off. Don't use a towel on its own, as it will leave lint in the cheese. However, wrapping the cheese brick first in paper towels and then in a normal hand towel works, because the paper towel peels off cleanly while the towel absorbs the moisture/whey without depositing lint on the cheese.


3.) Obviously, any pot that can hold 1 gallon of milk will do. However, a gallon of milk makes roughly a fist of cheese, which is a horrible return on effort. The reason I previously mentioned the 'ounce of silver for 4 gallons of milk' is that a typical brick of cheese is made from 4-8 gallons, and sometimes people don't want cash. Once you have made a few batches successfully, and feel comfortable that you are not going to botch things, a good 5+ gallon (20+ quart/20+ liter) pot is going to make things worthwhile.


4.) A colander is obvious. It’s not always necessary, but it can make life real easy. The usual task is lining the colander with the muslin cloth and pouring the curd into the cloth while the whey drains off.


5.) A cheese press is for the pros. I don’t even own one yet. You can normally get away with weights, plates, and all sorts of other tricks to ‘press’ the cheese. I will be buying/building one when I move on to the more advanced cheeses.


6.) A lot of recipes call for 'cutting the curd' into 1” squares. At 1 gallon this can be done with any normal kitchen knife. However, at 4+ gallons a longer knife (or a clean stick) may be required.


7.) It is important to have a ladle with holes to separate the curds from the whey. You can probably get by with a normal spoon/ladle, but the annoyance will eventually compel you to purchase a slotted spoon.


8.) A mortar and pestle helps with grinding up dried tablets and various spices. Some rennet (see below) comes in dried tablets, and these must be ground finely before mixing into the heated milk. A really sweet stone mortar and pestle may run as much as 50 bucks; however, the back end of a kitchen knife is sufficient to grind a tablet up properly.






There are some pretty special ingredients and some not-so-special ingredients. Getting familiar with them is going to happen sooner or later. Not every ingredient is needed for every type of cheese, and many of the difficult cheeses have extremely specialized ingredients, so I've left out the ones I have not figured out yet.


Rennet, sometimes called coagulant, is an enzyme that curdles the milk, separating the protein (the curds) from the whey/lactose. There are two major varieties, vegetable and calf, and both are available in dry and liquid versions. Dry rennet can be stored in a freezer for years; liquid is preferable, but lasts maybe 6 months in a refrigerator. Generally, regulations require that rennet be standardized to the amount of milk it will work on. Very few cheeses do not call for rennet. Expect a handful of this stuff to treat some 25 to 300 gallons of milk.
Calcium chloride is useful when attempting things with store-bought milk, though it is not a cure-all. For example, I don't know if it helps with mozzarella, because I simply refuse to use it; however, it would have been really handy if I had not been so stubborn in the beginning. It can be purchased in liquid or solid form, should generally be used in small quantities, and both forms have really long shelf lives. It's basically a salt that aids rennet in coagulating store-bought homogenized milk. However, it is necessary in some recipes even with raw milk.
Citric Acid is key in making mozzarella (read more below). Powdered citric acid is far superior to the alternatives; vinegar, lime juice, and lactic acid powder.
Cheese cultures come in two types, mesophilic and thermophilic, and depending on what you're doing they may not even be necessary. Mesophilic cultures grow at cooler temperatures (generally what you would consider room temperature), and thermophilic cultures grow at warmer temperatures closer to bath water. I tried my best to remain intentionally ignorant of the difference and its uses for as long as possible. We are talking about bacteria that generate the flavor in cheeses, so the knowledge of exactly what is going on is very subtle. Mesophilic bacteria are generally for wait-a-day, grow-at-room-temp type cheeses, while thermophilic bacteria are for heat-it-to-95F/35C, wait-an-hour type cheeses. My personal opinion is screw subtlety, as I'll explain below in my rebel parmesan recipe. A good piece of knowledge: any regular 'kefir/yogurt' starter package from a specialty store is a mesophilic culture if the instructions have it sit at room temp, and thermophilic if it sits in a heater. Also, most Italian cheese starter kits come with thermophilic packets.
Salt/brine is useful in various circumstances. There is normal table salt, which is so damn cheap you can buy a truckload for a Benjamin, but there are some other considerations. Table salt comes in iodized and non-iodized forms: with zero iodine in your diet you form huge goitres, while too much carries its own health risks (and if you are ever exposed to radioactive iodine, having enough dietary iodine reduces its uptake). I could go on, but there is table salt (iodized or not), sea salt, sea water, brine, and a few others. The whole point of a brine, or salt bath, is to get a cheese ready for aging or curing. If you're a newb like me, just pour a ton of salt into some water and give it a try; make the water as salty as or saltier than sea water. Occasionally, some cheeses need a specific brine in the bathwater prior to the cure, but hey….
Cheese wax, what a scam, man. I could be wrong, but this is for selling your cheese, and good luck with that. It does appear to make aged cheese last a whole lot longer; however, the real purpose appears to be sales. Don't trust my opinion here, but the wax is completely unnecessary for home use, because the feds will slam your ass in jail if you try to sell your cheese anyway. That said, if you want to make your cheese last for years and give it to an 'in-county-in-state' relative, it could be pretty cool.
Annatto is a dye for making cheese look good; it imparts the familiar yellow color to cheddars and other cheeses. I refuse to buy this stuff, because it has nothing to do with taste.
Sugar: a sweet tip is that if you feel you are short on sharpness with a particular cheese culture, simply add a pinch of sugar; it will accelerate the growth of the bacteria. I've heard that one can make a cheddar sharp as a razor this way. However, that might be an industry cat-out-of-the-bag secret.
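The 'as salty as sea water' brine advice above can be turned into quick arithmetic. A sketch, assuming seawater is about 3.5% salt by weight (roughly 35 g per liter); the numbers are my own back-of-the-envelope, not from the original:

```shell
# Rough brine math: at least 35 g of salt per liter of water
# makes the brine about as salty as seawater.
liters=4
grams=$((liters * 35))
echo "Dissolve at least ${grams} g of salt in ${liters} L of water"
```

Anything at or above that concentration satisfies the 'as-or-more salty than sea water' rule of thumb.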





Mozzarella in 30 Minutes is Impossible


Mozzarella is a 'fresh' cheese, meaning that aging is not required and it does not take days to produce… usually. There are recipes floating around the interweblinks claiming that mozzarella can be made in 30 minutes. However, those recipes are for suckers like me; I have yet to figure out how to do it that fast. I'm pretty sure I can get it down to 45 minutes if I cut some corners, but 30 minutes is for the pros. The most obvious reason to start learning cheese making with mozzarella is that it does not have one of those steps that says 'wait a day' or 'wait 3 weeks'. Some cheeses need to be aged; quality demands it, but our budgets and time demand that we make things in a timely and courteous manner for our meals. Here is the 30-minute mozzarella recipe modified for people who might be as incompetent as myself. I call it the… 1 hour mozzarella for incompetents recipe.


*adjust recipe per amount of milk


1.) Dump ¼ ounce (roughly 1½ teaspoons) of citric acid powder into 1 gallon of milk and mix (no hurry; this can be done at any point up to 90F)


2.) Get 1 gallon of milk to 90 degrees Fahrenheit.


3.) Dump in the 'right' amount of rennet for 1 gallon (do some research; rennet is standardized to the amount of milk it treats). Briskly stir the stuff in, like you mean it. Don't mind the crazies that tell you exactly how to stir it in, or to only use a wooden spoon. Just make sure you don't drop a tablet in whole… it needs to be ground up first. Liquid rennet is superior here because it does not need to be finely ground.


4.) Set aside for 30 minutes to 12 hours (it really does not matter, I did it for 2 days once and got a good result).


5.) Do the 1” criss-cross cut and heat to 120-130F. (I've found that the criss-cross cut is actually a short-cut that is not entirely necessary if you are willing to heat the mass over the course of 3 hours, adjusting the time up roughly an hour for each additional gallon.)
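The slow-heating alternative in step 5 works out to simple arithmetic. A sketch of the rule of thumb (my reading of the step: 3 hours for the first gallon, plus about an hour per additional gallon):

```shell
# Rule of thumb for skipping the criss-cross cut:
# ~3 hours base, plus ~1 hour for each gallon beyond the first.
gallons=4
hours=$((3 + gallons - 1))
echo "Plan on about ${hours} hours of slow heating for ${gallons} gallons"
```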


6.) Massage the whey out. This step is the most difficult to describe. The goal is to get all of the whey out of the curds. I typically tilt the pot sideways, gently squeezing the juice out of the curd mass. I don't care how you do it; gently squeeze, wrangle, and slap that thing into a big dough ball! This part is messy and delicate, and is best done on a cutting board sloped into a sink. Maybe you need slotted spoons, a colander, whatever. The goal is to get a hunk of semi-squeezed cheese: the start is some kind of soft curd, the end result is an actual slap-able mass. Just figure it out for yourself.


7.) If you want you can freeze it here, but thaw it to room temp before the next step.


8.) Add salt to taste; like salting eggs, this is very sensitive. Garlic, jalapeños, and other spices can be added at this step, but too much salt will ruin a good thing.


9.) Get someone with temperature-insensitive hands to stretch and mold the mozzarella. The idea is to partially melt the mass in a microwave; a fist per gallon takes maybe 30 seconds. This step is the one that kills most mozzarella, and it takes patience and hard hands. The old-school method is to use boiling saltwater (and skip the previous step), but a microwave makes things a lot easier.


10.) After the mass is stretchable, store in cold water for up to 8 hours, or serve immediately for best results.



Rebel Parmesan the Corporate Way



Normally, parmesan is produced exclusively using thermophilic cultures; this recipe is a true cheat because it uses mesophilic cultures instead. It will result in parmesan much like the kind in the store-bought shaker, but much better tasting. Obviously, it is not real parmesan, but neither is the kind in the shaker you are buying. The real kind is still the best, and the store-bought kind is still cheaper; this is simply a way to cheat the rules. Expect to yield about two or three knuckles of cheese from a gallon of milk. This is the only recipe here where time does not press: each step has no real time dependency, and it's extremely hard to mess up once you know how to do it.


1.) Dump a normal kefir starter package into a gallon of milk. Any mesophilic starter package will do, as long as it is not intended for a yogurt warmer (thermophilic). In fact, because timing is so loose here, the quantity of milk really does not matter. Set the container aside at room temp for a day….


2.) Wait 12-36 hours for a kefir/yogurt-like substance to result. Usually you will have to wait about half a day longer than normally expected, as you want the bacteria to fully develop. The process is highly temp-and-time driven, and there is a fine line between fully developed and spoiled. TRUST YOUR NOSE!!!! IF YOU THINK IT HAS PASSED, TRY AGAIN!!!!!!!!!!!!


3.) Dump the yogurt-kefir into a muslin cloth; the substance should look almost exactly like yogurt. Hang the muslin cloth like a bag, usually from a cupboard door. The whey should drain out over the course of a day or three. We are in no hurry!


4.) When the muslin cloth separates cleanly from the mass, it is time to move. If the mass does not separate from the cloth, it might be appropriate to throw it out or try a new strategy. We want a somewhat dry lug/brick. Larger volumes (2+ gallons) might have a separable mass but a squishy center. The mass should feel solid enough to toss lightly, at a minimum. The goal is about a quarter to half a fist of cheese per gallon here, so moisture must be forced out.


5.) Pressing might be completely unnecessary. However, if there is a squishy center or the mass does not feel solid, give it a day in the press. The goal is mostly to dry it out, so if you do not own a press, do what you can to absorb/squeeze the moisture out of the thing: weights, paper towels, plates, etc.


6.) Take the solid hunk and leave it in saltwater/brine for a day. Make sure the water tastes saltier than seawater. It’s okay to leave it in the brine for 1 day or 1 week if you’re crazy. You may choose to cut the hunk up before brining. This makes things pretty simple.


7.) Remove the pieces from the brine. Let dry for 1-15 days. It is okay to let them dry on a plate in-between paper towels, or even a hand-towel. These chunks should look/feel like little nodules by now.


8.) Once the hunks no longer appear moist or produce oil, break them up into finger-sized chunks and store in a jar in a refrigerator. If the hunks of cheese are pliable, something is horribly wrong; the chunks should break like chalk, and will store for more than a year without spoilage in refrigeration. If a hunk gets a dark color on it, it is spoiled, and so is its whole jar. DO NOT GRIND PRIOR TO STORAGE.


9.) To serve, grind up a small hunk (easier said than done). It will taste extremely similar to store-bought parmesan, but better; still not as good as the real thing.



I've been fiddling around with the Raspberry Pi for several months now (off and on), and I have a few things to share. This little computer is awesome. It's not exactly what you think it is, but in a way it is close.


To start off, the Raspberry Pi is everything you thought a smart phone would be before you got a smart phone. I don't know why, but I've always wanted a geiger counter in my smart phone. Since I couldn't get one, I stopped using smart phones. However, if you want a geiger counter on your Raspberry Pi, there is nothing stopping you.


More practically, the Raspberry Pi is a computer powered by a cell phone charger. On top of this small computer there are thousands of projects far closer to what a tricorder (from Star Trek) can do than any smart phone. Let's just say you are locked into an Apple IIe iPhone and want a computer that isn't so locked down, like an IBM-compatible Pi, so that you can actually do the things you want to. To make that awful sentence clear: I don't own a smart phone because it doesn't do the things that I want it to.


The Pi can (with great effort) do the things I want it to. There are already a lot of lists you can find of ‘ideas’ people have about what to do with a Pi. These lists are for things that other people have already done. Oddly, these lists are close to what I thought a smart phone would do before I got one… I was an early adopter, but eventually gave up on smart phones.


Some examples of things I want my phone to do:

geiger counter

measure toxic chemicals in the air

air oxygen/nitrogen/co2/etc levels

test metals for conductivity



radio scanner

Partial Augmented Reality


Okay, so it wouldn't be much of a stretch to say that the Raspberry Pi does all of those already. However, it will be extremely, extremely, extremely difficult for you to figure out how to get it all working. Don't worry though; it will be pretty cheap… just very time consuming.


The figure I came up with for the total cost: $25-$50, circa 2013.


Turning Raspberry Pi into a Spycam using just a webcam (my first adventure with pi)

Start by going to http://www.raspberrypi.org/downloads and downloading NOOBS. Also download the SD formatting tool and use it to format the SD card. (Installing a standalone image would require Win32DiskImager, but NOOBS does not need the disk image tool.) Place the unzipped NOOBS files on the SD card.


After booting, NOOBS will give you the option to install various operating systems; choose a flavor [Raspbian for this tutorial]. The installation process will take some time. It will reboot after installation and ask about a few more options in raspi-config. If for some reason you would like to change these options later:


sudo raspi-config


Ideally, do not turn on the HDMI-GUI 'boot to desktop' option; this saves a little bit of system resources (but not much). There may also be an option to enable SSH; if so, enable it. If you choose not to set a new password in raspi-config you can do it later with:


passwd pi


However, it’s wise to set a new password as soon as possible.


First off, we need to make sure that OpenSSH is working so that we don't have to type everything in on the Pi itself. PuTTY lets us access the command line from a remote computer as long as the Pi is plugged into a local router or connected through a local wireless router. Later we will go over connecting wirelessly, but for now it is best to have the Raspberry Pi plugged directly into a router so you can configure it from a desktop/laptop.




The following command will give you a printout where you can find your IP address on the local network provided by your router. Use this IP address to configure PuTTY for connecting to the Raspberry Pi remotely. Usually, the port is 22.
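The command itself appears to be missing from the original post. My assumption about what was intended, since on Raspbian this prints the Pi's address on the local network:

```shell
# Prints the Pi's IP address(es) on the local network.
# (Alternatives: 'ip addr show', or the older 'ifconfig'.)
hostname -I
```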




Upon first connecting, PuTTY will ask you if the public key is acceptable to you. If you want to check it, go back over to the Raspberry Pi and verify with:


ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub


Since you are on your home router, it is unlikely that anything is out of order here; usually people just accept it without checking. The long string of letters and numbers should be exactly the same on both ends. If someone were man-in-the-middle attacking you, the numbers would be different.


Once verified and connected, the compound command below is good measure: it updates the package lists and then upgrades the installed packages, and it also shows you that your OpenSSH/PuTTY connection is working well. It would be appropriate to disconnect the monitor now, because we won't be using it anymore; everything from here on out will be done over the network.


sudo apt-get update && sudo apt-get upgrade


Once PuTTY is working and updates/upgrades are done, we need to install motion.


sudo apt-get install motion


There are quite a few options for motion; it is highly customizable. It may seem daunting at first, but it's not terribly complicated, and we want to take advantage of all the customizable options. Read this document a few times to get familiar with it: http://linux.die.net/man/1/motion


To edit the .conf file:


sudo nano /etc/motion/motion.conf


Alternately if you would like to use a file other than the default provided:


sudo cp /etc/motion/motion.conf /etc/motion/motion.conf.originalbackup

sudo rm /etc/motion/motion.conf

sudo nano /etc/motion/motion.conf


Now copy/paste in your desired .conf file. To make things easy for a single-webcam setup, the following is a good /etc/motion/motion.conf file; it is what we will be using for this tutorial.


# Advanced single webcam /etc/motion/motion.conf file

# credit to Phillip Moxley

# modified from original generated file most helpful comments are removed


daemon on

process_id_file /var/run/motion/motion.pid

setup_mode off


#Capture device

videodevice /dev/video0

v4l2_palette 8

input 8

frequency 0

rotate 0

width 640

height 480

framerate 2

minimum_frame_time 0

netcam_tolerant_check off

auto_brightness off

brightness 0

contrast 0

saturation 0

hue 0


#Round Robin

# This is for multiple webcams on the same device

roundrobin_frames 1

roundrobin_skip 1

switchfilter off


#Motion Detection Settings:

threshold 1500

threshold_tune off

noise_level 32

noise_tune on

despeckle EedDl

smart_mask_speed 0

lightswitch 0


## These motion detect settings are important for our purposes


# Picture frames must contain motion at least the specified number of frames

# in a row before they are detected as true motion. At the default of 1, all

# motion is detected. Valid range: 1 to thousands, recommended 1-5

minimum_motion_frames 1


# Specifies the number of pre-captured (buffered) pictures from before motion

# was detected that will be output at motion detection.

# Recommended range: 0 to 5 (default: 0)

# Do not use large values! Large values will cause Motion to skip video frames and

# cause unsmooth mpegs. To smooth mpegs use larger values of post_capture instead.

pre_capture 2

post_capture 5


# Gap is the seconds of no motion detection that triggers the end of an event

gap 60


output_all off


# Image File Output

#turn on-off images

output_normal on

quality 75

output_motion off

ppm off


# Video Options


# turn on-off video

ffmpeg_cap_new on


# mess with this to attempt higher-lower video quality

ffmpeg_bps 500000

ffmpeg_variable_bitrate 0

ffmpeg_video_codec swf

ffmpeg_deinterlace off


# 0 turns off timelapse

ffmpeg_timelapse 5

# Valid values: hourly, daily, weekly-sunday, weekly-monday, monthly, manual

ffmpeg_timelapse_mode weekly-sunday


ffmpeg_cap_motion off

snapshot_interval 0


# Text Display Settings


# Text is placed in lower right corner

text_right %m-%d-%Y\n%T-%q


# Text is placed in lower left corner



# This option defines the value of the special event conversion specifier %C

# You can use any conversion specifier in this option except %C. Date and time

# values are from the timestamp of the first image in the current event.

# Default: %Y%m%d%H%M%S

# The idea is that %C can be used filenames and text_left/right for creating

# a unique identifier for each event.

text_event %Y%m%d%H%M%S

text_double off

locate off

text_changes off


# Target Directories and filenames For Images And Films

target_dir /home/motionDL

# File path for motion triggered images

jpeg_filename images/%v-%Y%m%d%H%M%S-%q

# File path for motion triggered ffmpeg films (mpeg)

movie_filename movies/%v-%Y%m%d%H%M%S

# File path for snapshots

snapshot_filename snapshots/%v-%Y%m%d%H%M%S-snapshot

# File path for timelapse mpegs

timelapse_filename timelapses/%Y%m%d-timelapse


# Live Webcam Server


# 'on' restricts the live webcam server to localhost only, so on effectively means off for remote viewing

webcam_localhost on

# Quality 1-100

webcam_quality 50

webcam_port 8081

webcam_limit 0

webcam_motion on

webcam_maxrate 1



# HTTP Based Control


# likewise, 'on' restricts HTTP control to localhost only

control_localhost on

# be careful this password will not be encrypted by default

; control_authentication username:password

control_port 8080

control_html_output on

quiet on


#These options allow us to write some scripts for extending the functionality

; on_event_start ./home/pi/start.sh

; on_event_end ./home/pi/end.sh

; on_picture_save ./home/pi/picsav.sh; export f=%f

; on_motion_detected ./home/pi/motdet.sh

; on_movie_end ./home/pi/vidcon.sh; export f=%f

; on_area_detected value

; on_movie_start value

; on_camera_lost value


Look at the bottom of the PuTTY/command-line interface for nano's key hints; ctrl+s will not work here. Once saved (ctrl+o, [enter], ctrl+x), it's a good idea to restart motion.


sudo /etc/init.d/motion restart


If you get a message telling you that the daemon is disabled, even though you enabled it in the .conf file, another file is disabling it. Just edit the file below, change no to yes, and retry the command above.


sudo nano /etc/default/motion
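The flag in question is start_motion_daemon (at least in the Debian/Raspbian motion package). A sed one-liner is an alternative to opening nano, demonstrated here on a scratch copy so nothing real is touched:

```shell
# Demo on a scratch copy; on the Pi you would point sed at
# /etc/default/motion itself (with sudo).
echo 'start_motion_daemon=no' > /tmp/motion.default.demo
sed -i 's/start_motion_daemon=no/start_motion_daemon=yes/' /tmp/motion.default.demo
cat /tmp/motion.default.demo
```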


We need to create the folder that motion will output to and let the motion user create the sub-folders inside it. If you are using FTP it is not advisable to leave the top folder at permissions 777, because that may prevent vsftpd from allowing access; more on this later, after everything is working correctly. Make sure to walk in front of the camera a few times to get motion to create the directories that you want.


sudo mkdir /home/motionDL

sudo chmod 777 /home/motionDL


For now we will step aside and complete a few other tasks before filling out the scripts. We want to be able to access the files on the Raspberry Pi without having to remove the SD card, which means we can check whether things are working correctly. We need to install an FTP server. However, not just any FTP server will do; we want to access the files through an encrypted connection. There's a program for this!


sudo apt-get install vsftpd


Again we get another one of these really long .conf files. It's not as long as the previous one, but you should read it a couple of times to familiarize yourself with it: http://linux.die.net/man/5/vsftpd.conf


Truly, it may be advisable to separate reading either of these by a good day or so. For the more crazy among us, back-to-back reading is preferable. Just make sure not to mix up commands from either .conf file.


sudo cp /etc/vsftpd.conf /etc/vsftpd.conf.originalbackup

sudo rm /etc/vsftpd.conf

sudo nano /etc/vsftpd.conf


Paste this in:


# Advanced vsftp /etc/vsftpd.conf file

# credit to Phillip Moxley

# modified from original generated file most helpful comments are removed
















# Customization
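The actual working lines appear to have been lost from this write-up. What follows is my reconstruction, not the author's original: a minimal set of directives consistent with the later steps in this tutorial (explicit TLS on port 2211, the vsftpd.pem cert, and the chroot list edits). Check every line against the vsftpd.conf man page before trusting it.

```shell
# RECONSTRUCTED SKETCH -- the original lines were lost from this post
listen=YES
listen_port=2211
anonymous_enable=NO
local_enable=YES
# download-only, per the end of this tutorial
write_enable=NO
# explicit FTP over TLS using the self-signed vsftpd.pem cert
ssl_enable=YES
rsa_cert_file=/etc/ssl/certs/vsftpd.pem
force_local_data_ssl=YES
force_local_logins_ssl=YES
# these chroot lines start commented out; a later step uncomments them
#secure_chroot_dir=/var/run/vsftpd/empty
#chroot_local_user=NO
#chroot_list_enable=YES
#chroot_list_file=/etc/vsftpd.chroot_list
```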























After editing/saving the .conf file restart vsftpd


sudo /etc/init.d/vsftpd restart


Right now, without anything else, you can use FileZilla to log in on port 22 using the same certificate, password, and IP address that PuTTY is using. Make sure to use SFTP instead of FTP. If you want to ensure that the SSH cert is being used, delete the saved key from the registry: go to Start, type in regedit, navigate to HKEY_CURRENT_USER>Software>SimonTatham>SshHostKeys, and delete the key corresponding to your Raspberry Pi host. After restarting FileZilla it will ask you to confirm the key again when you log in. Deleting the key from the registry only makes PuTTY/FileZilla forget about the cert, nothing else; once you accept it again it will be put back into the registry in the same place. Clicking 'no' will let you use the cert key without putting it back into the registry.


Doing it the easy way above means the FTP user has access to ALL files on the Raspberry Pi. For extra security there is another way of doing the FTP encryption: we will create another user that has access only to the files created by motion, and use a different type of connection (so that your SSH tunnel cannot be compromised). Obviously this second option is preferable. However, it makes things much more complicated; we have to make an SSL cert, a new user, and all sorts of other stuff.


To encrypt the traffic for this other connection we need to generate a new SSL certificate. This cert will be self-signed because it's not intended for use with other people; it's a personal cert generated by you.


cd /etc/ssl/certs

sudo openssl req -x509 -nodes -days 7300 -newkey rsa:2048 -keyout /etc/ssl/certs/vsftpd.pem -out /etc/ssl/certs/vsftpd.pem


If this command does not ask you a bunch of weird questions, it didn't work. It doesn't matter what you type in, as long as the country code is only 2 upper-case letters and the email address has an @ symbol in the right place. After the cert is generated, make sure the permissions are good.


sudo chmod 600 /etc/ssl/certs/vsftpd.pem


In FileZilla, instead of selecting SFTP, change it to FTP, set the port to 2211 (or whatever you changed it to in /etc/vsftpd.conf), and change the encryption to “Require explicit FTP over TLS”. Leaving the username and password the same, it should give you a much more complicated certificate trust box. If you would like to check the fingerprint, go back to the command line and try this:


sudo openssl x509 -noout -in /etc/ssl/certs/vsftpd.pem -fingerprint -sha1


Now the FTP and SSH connections are separate. SFTP will still remain accessible, but we can create a different user for connecting over FTP via SSL. Only the files created by motion will be accessible to this user, and the user's traffic is encrypted over SSL; nothing else can be messed with.


To create the new user:


sudo useradd -d /home/motionDL <username>

sudo passwd <username>


This makes the folder motion is putting its files into the home folder of the new user. Logging in with this username and password in Filezilla will drop you straight into this folder. However, the user still has access to all of the other files on the Raspberry Pi host. There are also a few other security concerns.


Make a dummy user


sudo useradd -d /var/run/vsftpd/empty <DummyUsername>


Go back into the vsftpd.conf file and uncomment the following lines:


sudo nano /etc/vsftpd.conf


#This protects against some unauthorized access



#If this is changed to YES, then it will jail all users that are not on the list below

#doing the opposite of what we want



#This allows users to be ‘trapped’ or ‘jailed’ in their home directory



#This is a list of ‘trapped’ users



Now edit the ‘trapped’ user list and add the other user you created to control the output of motion (/home/motionDL/). Do not add the DummyUsername to this list.
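For reference, the directives those comments describe usually look like the following in /etc/vsftpd.conf (names taken from the stock vsftpd configuration; double-check against the comments in your own file):

```shell
# jail only the users on the list below (YES would jail everyone NOT listed)
chroot_local_user=NO
# allow users to be 'trapped'/'jailed' in their home directory
chroot_list_enable=YES
# the list of 'trapped' users
chroot_list_file=/etc/vsftpd.chroot_list
# empty, unwritable directory vsftpd uses as a secure jail
secure_chroot_dir=/var/run/vsftpd/empty
```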


sudo nano /etc/vsftpd.chroot_list


Restart vsftpd:


sudo /etc/init.d/vsftpd restart


The settings we have used so far should actually prevent you from logging in via FTP over SSL. This is because vsftpd attempts to block bad configurations: if vsftpd were to allow you to log in, you would have full ability to delete /home/motionDL. So let’s fix this and try again.


cd /home/

ls -al


The /home/motionDL/ folder should be owned by root:root. It currently has rwxrwxrwx on it so anything can write to it or execute, because motion needs the ability to create folders inside it. If it is owned or grouped by anything other than root, run these:


sudo chown root motionDL

sudo chgrp root motionDL


Next, vsftpd wants the ftp user to lack write permission on the top-level folder, because a writable chroot root is a ‘bad’ configuration:


sudo chmod 755 motionDL


Upon logging in via Filezilla, you will notice that it only displays the output folders of motion. All the captures are there, and nothing else. Keep in mind that with these permissions motion can no longer create new folders, but it can still add files to the existing ones. Your ftp user, meanwhile, can only download these files; it cannot delete them, rename them, or add files.


If you would like that ability simply add your ftp user to the motion group and modify the permissions of the folders.


sudo usermod -a -G motion <username>

cd /home/motionDL

sudo chmod 775 * #WARNING BE CAREFUL


Be careful with that last command: if you are not in /home/motionDL/ it can cause some pretty serious damage. If you would like to remove the ftp user’s ability to modify files, simply remove it from the group. The idea is to change the permissions on all of the folders that motion outputs, so the commands above and below will not work if motion has not created those folders yet.


sudo gpasswd -d <username> motion

cd /home/motionDL

sudo chmod 744 * #AGAIN WARNING BE CAREFUL


Your ftp user cannot delete /home/motionDL or the folders created by motion. However, it can delete, add, or modify anything inside these folders. If this is the way you want it, that is fine. It’s also possible to make the files read only.


Keep in mind that there are other strategies for making this work: for example, commenting out “write_enable=YES” in /etc/vsftpd.conf so that files cannot be removed at all, or using virtual users https://wiki.archlinux.org/index.php/Very_Secure_FTP_Daemon#Tips_and_tricks.


Now we must create the scripts and make them executable. The commands below just create empty files for the scripts; it’s okay to leave them empty and run motion, because nothing happens when an empty script fires. It’s also okay to copy/paste the whole set of commands. They will run back to back, though the last one might require an [enter].


cd ~

touch start.sh && sudo chmod +x start.sh && sudo chown motion start.sh && sudo chgrp motion start.sh

touch end.sh && sudo chmod +x end.sh && sudo chown motion end.sh && sudo chgrp motion end.sh

touch picsav.sh && sudo chmod +x picsav.sh && sudo chown motion picsav.sh && sudo chgrp motion picsav.sh

touch motdet.sh && sudo chmod +x motdet.sh && sudo chown motion motdet.sh && sudo chgrp motion motdet.sh

touch vidcon.sh && sudo chmod +x vidcon.sh && sudo chown motion vidcon.sh && sudo chgrp motion vidcon.sh


Type this to verify that they are there:



ls -al


The next problem is that we don’t want to have to download all of these files and browse through them just to see what is in them. It can be tedious to view a couple thousand images, or even several dozen videos; we want to inspect one file. There are a couple of ways of doing this, but it depends on how you plan to use your Raspberry Pi webcam setup.


  1. Live Host (turn off all file saving – enable host and skip ahead)
  2. Busy place (turn off video – leave images and timelapse on)
  3. Non busy place (turn everything on – add script for video concatenation)

The easiest way of inspecting the output is the timelapse feature, but it all depends on exactly what you are doing. If you plan on using the Raspberry Pi to host a live camera, then it is best to disable the video, timelapse, and image capture, and enable the live webcam server. Do this in /etc/motion/motion.conf, then skip to the part about port forwarding to get the hosted webcam through your router and give it a URL.


output_normal off

ffmpeg_cap_new off

ffmpeg_timelapse 0

# on means off and off means on for the live webcam server

webcam_localhost on


If you plan on using this to watch something that is always active like a restaurant or a busy street then saved video will not be useful at all. However, the stills and the timelapse will be useful.


output_normal on

ffmpeg_cap_new off

ffmpeg_timelapse 5


If you plan on watching something that is rarely active like a vault or a secret lair then all three are needed.


output_normal on

ffmpeg_cap_new on

ffmpeg_timelapse 5


This way, if it’s a busy camera it doesn’t get overloaded by the video; if it’s a not-so-busy camera the video is convenient; and if it’s a broadcast camera there isn’t much ‘recording’ to be done.


The trick with the video is that motion does not output mpeg1/2 files through ffmpeg (go figure). To get a single video that is easy to watch we want mpeg1/2 files, because they can simply be concatenated together. We could go grab an older version of ffmpeg and try to cut through all the possible errors of installing deprecated software, but that could be problematic.


We are going to write a script to convert the video files into mpeg1/2 and splice them together. This script will kick off each time a new video is produced by motion: it converts the video to an mpg and appends it to the end of the last one, so we end up with a nice single file to download. There is one problem. If somebody dances in front of your camera for an hour, the Raspberry Pi will not have the resources to convert the output, so the script needs to stop if the previous video has not finished processing.


Do not use this file if you plan on watching a busy scene.


cd ~

nano vidcon.sh




#credit Phillip Moxley
#runs on completion of a video event captured in motion
#converts to mpg and appends; stops if the previous conversion has not completed
#All log and debug options removed

#How many files are in the folder
cd /home/motionftp/movies

n=$(ls *.swf | wc -l)

#Is a conversion already running? (this instance of the script counts as 1)
p=$(pgrep -cf vidcon.sh)

#Convert or abort
if [ "$p" -eq 1 ]; then
    for i in *.swf; do
        #convert the file
        yes | avconv -i "$i" -r 20 temp.mpg
        #splice the video together
        cat temp.mpg >> output.mpg
        #remove completed files
        rm "$i" temp.mpg
    done
else
    cd ~
    echo "$(date) dammit process is running" >> loggylog.log
fi



The script executes when a video ends, which is governed by the gap variable in /etc/motion/motion.conf. If the video is really long and another one comes along, the script will refuse to run, ensuring the Raspberry Pi doesn’t get overwhelmed; the Pi was never intended to constantly re-encode video. Basically, this script makes one big file for you to watch (that is not timelapsed). This means that if the SD card is not big enough to handle a clown dancing in your vault for 3 hours, then don’t use the script… or get a bigger SD card… or figure something else out.


There are other scripts that can run at various times. The relevant settings in the /etc/motion/motion.conf file are explained below.


on_motion_detect and on_event_start both fire as soon as motion is detected. The difference is that on_motion_detect fires on every detection, even if the ‘gap’ below has not expired, while on_event_start fires only at the start of a new event (one that begins after the gap time is over).


The number of frames of motion that motion sees before it starts recording governs when on_picture_save and on_movie_start fire. The difference is that on_picture_save fires at the end of each picture save and passes a %f filename variable to the command, while on_movie_start simply marks the moment a movie begins to save.


This bit is pretty self explanatory

minimum_motion_frames 1


The number of frames to keep in a video prior to a video starting. Governs how many frames are pre-pended to the video

pre_capture 2


Number of frames to capture after motion is no longer detected (default: 0). Governs number of frames that are post-pended to the video.

post_capture 5


The gap is the number of seconds without motion detection that triggers the end of an event. It is how long motion waits before on_movie_end and on_event_end are triggered. The difference is that on_movie_end passes the %f filename variable and fires after the video file is closed, while on_event_end fires exactly at event end time.

gap 60
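Putting these hooks to use, the wiring in /etc/motion/motion.conf for the scripts created earlier might look like the following. The paths and exact hook spellings here are assumptions (motion versions differ slightly on hook names), so check them against your own motion.conf:

```shell
# hypothetical hook wiring; adjust paths to wherever your scripts live
on_event_start /home/pi/start.sh
on_event_end /home/pi/end.sh
on_picture_save /home/pi/picsav.sh %f
on_motion_detected /home/pi/motdet.sh
on_movie_end /home/pi/vidcon.sh %f
```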


You might have noticed that motion is not putting the correct time on the time-stamp in the video and images. There are a couple of causes. One is that it might be using UTC, not your local time. The other is that the Raspberry Pi cannot keep time across reboots; it has no battery-backed clock, so it needs a connection to the internet to set the time.


To solve the first issue, we could simply add UTC to the time-stamp. This still might confuse people.


text_right %m-%d-%Y\n%T-UTC-%q


However, it might be better to configure Raspberry Pi to output your local time zone. In this way you can replace UTC with the time-zone you are in.


sudo raspi-config


In the internationalization options pick time-zone, and set it to your time-zone. Also, in Filezilla there is a place in the site manager to adjust the time difference between the server and the client under the advanced tab.


WIRELESS NETWORKING (Networking issue No. 2)


We have a fully functioning unit now. However, we need to work on the network and connectivity issues (because the clock is off, right?). Up until now we have been operating over a local wired network or standalone. If we want to work over a wireless network there are some steps. Additionally, if we want to access the machine from a remote location there are even more steps.


Be careful in your selection of a wireless dongle, because not all dongles work with all linux distros or with the Raspberry Pi. Additionally, some dongles require a powered USB hub because the Pi cannot supply enough power to run them. Even worse, if your power supply is within spec on voltage but cannot deliver enough current, the dongle may be under-powered and simply not function, or shut off after a while. To make this the mother of all complexity, some dongles must be explicitly told to stay on even when power starved.


There are two methods of using wireless: the ‘router’ method and the ‘adhoc’ method. Here we will assume you have a home router with wifi, and the password for this router. The ‘adhoc’ method will be set aside for some other time.


The easy way:


sudo nano /etc/network/interfaces


add the following lines:


wpa-ssid "XXXXXX-XXXXXX"

wpa-psk "XXXXXXXXX"


Where wpa-ssid is exactly the name of the router’s network, and wpa-psk is the password. This might not work very well, or it could be splendid.
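For context, a complete wlan0 stanza in /etc/network/interfaces would look roughly like this. This is a sketch assuming the router hands out addresses via DHCP; only the two wpa- lines come from the steps above:

```shell
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-ssid "XXXXXX-XXXXXX"
    wpa-psk "XXXXXXXXX"
```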


The hard way:






If the network does not automatically connect after following the above link’s steps, try the wpa_cli commands enable_network 0 and reassociate, or the help file: http://linux.die.net/man/8/wpa_cli . Also, make sure that the .conf file has your ssid and psk in it.


sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
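A minimal wpa_supplicant.conf with the ssid and psk filled in looks something like this (an illustrative sketch, not your exact file; the ctrl_interface line is the Debian default):

```shell
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="XXXXXX-XXXXXX"
    psk="XXXXXXXXX"
}
```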










For the Power starved:


Add this to the /etc/network/interfaces file… just paste it in anywhere, like at the end.


wireless-power off


If that doesn’t work, there may be an option in some crevice of linux that forces a dongle not to shut off intermittently. However, it is different for each dongle.




REMOTE CONNECTING (Networking issue No. 8)


Since we have hooked up both SFTP and SSL FTP, it is now reasonable to connect to the Raspberry Pi over the internet. First and foremost we need access to the local router. This means knowing the router’s local IP address, username, and password. Type the router’s local IP address into the address bar of a web browser, then log in with the username and password.


Regrettably, most routers differ here; every brand has a similar but unique-looking user interface. Heck, there is even a custom linux operating system for routers. {{LINK}}. It is even possible to set up a Raspberry Pi as your router to do awesome things! The objective is to port forward to the Raspberry Pi. The secondary objective is to reserve a local address for the Raspberry Pi.


Pick a number between 1 and 65535, avoiding the long list of well-known ports (e.g. 20, 21, 22, 80, 8080). Take that number and forward it to port 2211 (or whatever you changed it to) at the local IP address of the Raspberry Pi. The idea is simple enough once you wrap your head around it.


Find your public IP (not to be confused with your local IP). Search “what is my IP” or something like that. If you really want to be annoying, call your ISP and ask. If you port forwarded your random number to the local IP and your listening port, then it should work fine.


[Public_IP]:[Public_Port] —> [Local_IP_Address_of_Raspberry_Pi]:[2211_or_your_listening_port]

268.301.22.36:4798 —> 192.168.1.42:2211 (made-up example addresses)


All of this nonsense means that you should be able to type into the address bar of your browser [Public_IP]:[Public_Port], and get the same thing you would get if you typed [Local_IP]:[listening_port]. The only difference is that the public version will work anywhere on the internet, while the local one only works at home.


Hang on, because it works just the same for Filezilla and/or Putty. This means you can fully access the thing from anywhere, as long as you know your public IP, public port, username, password, and fingerprints. What? Yeah, you might want to write down the fingerprints, unless you are using a laptop that has them saved. If you try to connect remotely and the SSH or SSL fingerprint is not the same, it’s a bad sign: do not type in your password, because the connection is probably not private.


If you are like most people, your ISP has not given you a static IP address, because they want more money for that. This means that at any random time you may find you cannot connect to the Raspberry Pi remotely because the ISP changed your public IP. In truth it is more of an arbitrary time than a random one; it all depends on your ISP. Typically you can expect it not to change except once in a blue moon. However, it’s not your decision, or the moon’s.


To solve this we want to replace your public IP address with a URL of some kind. This can be done in most router interfaces, but we’re going to do it with the Pi itself instead. The Raspberry Pi will report its location to a website at an interval of your choosing. Be aware: it will report its location to that third party whenever it has any connection to the internet at all, even if you cannot access the Raspberry Pi to tell it to stop. There are other ways of getting around the dynamic IP address assignment, but this is the easiest.


It starts with:


sudo apt-get install ddclient


The installation will ask you a bunch of questions; it’s fine to type in placeholder nonsense for these, because you can edit it all later with this command:


sudo nano /etc/ddclient/ddclient.conf


There are some heavy privacy concerns to weigh here. Configuring this software will make the device report its public IP address to the service you choose on a regular basis. This happens whether or not port forwarding is enabled on its local network, meaning that even if you cannot access the unit, it will still report its public IP address.


As a basic courtesy, if it is a free service please make ‘daemon=XXX’ greater than 3600, which translates into checking in once an hour or less. It may sound like a pain, but good manners will get you more than you expect.


dyn.com/dns/ is a website that used to offer dynamic DNS service for free, but now charges $10/year.


dnsdynamic.org is a website offering the service for free.


Do some searches on exactly how to configure ddclient. This might help: http://sourceforge.net/p/ddclient/wiki/usage/
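As a starting point, a hypothetical /etc/ddclient/ddclient.conf for a dyndns2-style provider might look like this. Every value here is a placeholder and the server/protocol lines are assumptions; check your provider’s own ddclient instructions before using it:

```shell
# check once an hour, out of courtesy to a free service
daemon=3600
ssl=yes
protocol=dyndns2
use=web
server=www.dnsdynamic.org
login=you@example.com
password='XXXXXXXX'
yourhost.dnsdynamic.org
```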






AD-HOC WIRELESS NETWORKING (Networking issue No. 4)




BYPASSING RESTRICTIONS (Networking issue No. 5)


This section will not help you; it will only make you ponder things. If you do not have access to the local network’s router, then you may not be able to remotely access the Raspberry Pi. A few ideas about this issue are offered as food for thought.


Tor browser can somewhat mitigate privacy concerns especially related to the dynamic IP host provider you have chosen.


Packets can be wrapped in an http wrapper making them appear as regular web traffic if your local network prohibits SSL or SSH packets.


It’s illegal without permission, but BackTrack 5 can be used to crack WPA passwords in somewhere between hours and months. Except it requires a much more powerful machine than the Raspberry Pi.


BONUS MATERIAL: AUDIO (Programming issue No. 15)


So your Raspberry Pi is keeping your secret fortress under constant private surveillance. However, if someone finds your little spy, then you lose the images/video of the fool who steals it. Hence, we need it to upload the files to a remote ftp server somewhere offsite, and this needs to happen privately and encrypted.


The less recommended option is to use google docs, or some other free web service like mega. These commands will start you on your way to uploading to google docs, but I’m not very interested in that… what with current events and all.


sudo apt-get install python-gdata

wget http://googlecl.googlecode.com/files/googlecl_0.9.5-1_all.deb

sudo dpkg -i googlecl_0.9.5-1_all.deb


Since we already know how to set up an FTP server with vsftpd, we can set one up wherever we like. The crux of the problem lies in the settings on this second server. You see, if the fool that broke into your lair takes the Raspberry Pi with him, he might be a super smart guy and find the password to the backup ftp. As ridiculous as this sounds, it means that the script accessing the remote ftp cannot contain a password. Guess what: the google docs approach above would require you to save your google password (or the password for the google docs account you created) in a script.


On another machine far away from your secret palace all you have to do is repeat most of the steps previously gone over. Except, change the vsftpd server to allow anonymous uploads. Before doing this I recommend reading the manual a good dozen times. http://vsftpd.beasts.org/vsftpd_conf.html


Yes, read that until you can almost quote it, because in order to keep everything secure on this server that accepts anonymous uploads, you’re going to have to be crafty. The weakness is that this invader can merely upload a bunch of nonsense files to your backup server before he arrives to steal your Pi. Of course, this is reaching ridiculous proportions.


The on_picture_save option in the configuration file passes the full path of the saved image to the command with %f. However, it is somewhat awkward to pass this variable into a script. To make things as easy as possible, put the following command directly in the /etc/motion/motion.conf file, and remove the part that calls the script.


sudo nano /etc/motion/motion.conf


on_picture_save curl -v -k --ftp-ssl -T %f ftp://anonymous@[server_IP_address or URL]:[server_port]/images/


Curl should come installed by default. There are several other ways of uploading to a remote ftp, but they all have some kind of problem or another. For example, wput can upload just as easily as curl, but it appears to have dropped support for ssl/tls. Without ssl it might be possible for someone to view the images as they are passed to the remote ftp server.


An example of wput without ssl:


sudo apt-get install wput

sudo nano /etc/motion/motion.conf


on_picture_save wput %f ftp://[serverIP or serverURL]:[Server Port]/images/


Another alternative is lftp, a fully configurable command line ftp client; it’s nothing less than a command line version of Filezilla. Lftp brings up its own sub-command-line interface. The difficulty here is in passing the %f filename path into the command.


sudo apt-get install lftp

sudo nano picsav.sh



#picsav.sh for whole folder upload. credit: Phillip Moxley
lftp -c 'set ftp:ssl-force true; \
set ftp:ssl-protect-data true; \
set ftp:ssl-allow true; \
set ftp:ssl-allow-anonymous true; \
set ftp:ssl-auth TLS; \
set ssl:verify-certificate false; \
open ftp://anonymous@[host_url_or_IP]:[hostport]; \
lcd /home/motionDL/images/; \
cd /images/; \
mput *'


This script attempts to upload the entire folder every time an image is saved, so it’s not very useful for images; it would just overload the Raspberry Pi with redundant uploads. However, you can modify it for use in a cron job to upload backups of videos or audio.


Speaking of audio, motion does not record audio. Most webcams have mics on them, and it seems a shame to waste the thing.


There is a way of using arecord (a linux default) to record audio



However, sox is much better. http://linux.die.net/man/1/sox


sudo apt-get install sox


There are several things that you might want to do with audio. Once you think about it, it makes perfect sense for motion to leave audio out of the mix; there is just too much for such a simple tool to handle.


First, you might want to add audio to the movies that motion outputs. However, this means the audio might be cut short or long, because motion detection governs when the audio starts and stops. There may be pieces that you miss this way, or extra long bits of silence.


Second, you can run sox in a way that records audio in the same manner as motion captures video: sox captures audio when it occurs, in essence an ‘audio detector’ in the same way motion is a ‘motion detector’. The drawback is that the audio will rarely match up with the images/video; there will be bits of audio with no video, and some images/video without any audio.


Thirdly, you might want to stream audio alongside the hosted webcam. Either way, it is highly unlikely that recorded audio will finish uploading to a remote server if the unit is discovered and taken; this is the weakness of recorded audio. It makes streaming the most desirable option, since streaming means immediate upload.


Just to show we can do it, let’s add audio to the video output that we created earlier. Add the following line to the start.sh script; it will start recording whenever an ‘event’ starts. This audio is to be added to the video only, so it goes in the /movies directory.


AUDIODEV=hw:1,0 AUDIODRIVER=alsa rec /home/motionDL/movies/audio.wav &


Something that may bug you to no end: the user ‘motion’, which runs all of the motion related things in the background, does not have permission to run the above command, because it is not in the ‘audio’ group. So let’s add it to that group and reboot to make sure it’s all in order.


sudo usermod -a -G audio motion

sudo reboot


To check that it worked try this and look for the group named audio.


cat /etc/group
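Scanning /etc/group works; a quicker check is to print a user’s groups directly. This sketch prints the current user’s groups; to check the motion user specifically, the idea is to run the same command as that user:

```shell
# Print the groups the current user belongs to; for the motion user,
# run it via e.g. `sudo -u motion id -nG` and look for 'audio' in the output
id -nG
```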


Another way to check would be to run the script under the user ‘motion’ just to check if it runs. However, you might have to use ctrl-c to get out of it.


sudo -u motion ./start.sh


If you are having trouble with the hardware (or AUDIODEV) part of this whole operation, then try restarting a couple of times. Also, this tool might help a little.


sudo apt-get install uvcdynctrl


If the start.sh script is putting audio into the /movies folder, then we can add a few lines to the vidcon.sh script and it will combine everything for us.


cd ~

sudo nano vidcon.sh


Change the script to look more like this:




#credit Phillip Moxley
#runs on completion of a video event captured in motion
#converts to mpg and appends
#stops if previous video has not completed

#stop the audio recording
killp=$(ps -ef | grep motion | grep rec | awk '{print $2}')
kill -2 $killp

#How many files are in the folder
cd /home/motionftp/movies

n=$(ls *.swf | wc -l)

#Convert or abort
if [ "$n" -eq 1 ]; then
    #grab the gap seconds from the config file
    vidgap=$(grep '^gap' /etc/motion/motion.conf | awk '{print $2}')

    #trim the excess audio off of the file per the gap
    sox -v 8 -r 8000 audio.wav choppedaudio.wav trim 0 0:$vidgap

    #add audio to video compile command
    yes | avconv -i *.swf -i choppedaudio.wav -r 20 temp.mpg

    #finishes it all up
    cat temp.mpg >> output.mpg
    rm *.swf temp.mpg audio.wav choppedaudio.wav
else
    cd ~
    echo dammit_too_many $(date) >> /home/motionftp/movies/loggylog.log
fi



It might sound like a good idea to mix the audio in with the video, but it really isn’t ideal. It places heavy processing loads on the Raspberry Pi to incessantly re-encode video and mix it with audio, and it drags everything else down.


The only thing left would be to figure out the whole streaming to a private and/or public server. If you want some clues on how to stream the audio/video this link will help:










arecord -f S16_LE -r 22050 -D plughw:%t /home/motionftp/audio/%Y%m%d_%H%M



b=0; for i in *.swf; do avconv -i "$i" -c:v mpeg2video -r 20 "$b.mpg"; b=$((b+1)); done


for i in *.swf; do


x=0; for i in $(ls -t *.swf); do counter=$(printf %05d $x); ln -s "$i" "$counter".swf; x=$(($x+1)); done




cd /home/pi/motion/movies/ && cat *.swf | ffmpeg -i - -ar 44100 tmp.flv &&
mv tmp.flv VideoOutput`date +%m%d%y`.flv




-something to combine audio pieces

-silence to chop off ends of long silence


-a set up prototype upload scripts

-ftp host on kip, mutley, and a card ready for pi2


-some more vsftpd reading >> figure out the whole user problem


-a cron to stop motion, move files to archives,

-a cron to check disk space used by motion and delete accordingly, and also
send a red flag up.


cd /home/motionftp/movies

avconv -i *.swf -r 20 temp.mpg

ps -ef | grep avconv | grep swf



ps -ef | grep recaudio.wav | awk '{print $2}' | xargs kill -SIGINT






sudo apt-get install chkconfig

chkconfig <service> off


sox output.wav -n stats -s 16 2>&1 | awk '/^Max\ level/ {print int($3)}'

#vsftpd /etc/vsftpd.conf anon host, credit to Phillip Moxley


# Customization










#chown_upload_mode 0444




















#Anonymous Security





#Access Settings
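To make the appendix above concrete, the anonymous-upload sections of /etc/vsftpd.conf might be sketched like this. Directive names come from the vsftpd man page, but this is an untested illustration (the chown_username is a placeholder); read the manual the recommended dozen times before trusting it:

```shell
#Anonymous Security
anonymous_enable=YES
local_enable=NO
anon_upload_enable=YES
anon_mkdir_write_enable=NO
anon_other_write_enable=NO
# uploaded files become owned by a separate user, unreadable to anonymous
chown_uploads=YES
chown_username=ftpsecure
#Access Settings
write_enable=YES
ssl_enable=YES
allow_anon_ssl=YES
force_anon_data_ssl=YES
force_anon_logins_ssl=YES
rsa_cert_file=/etc/ssl/certs/vsftpd.pem
```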





Ubuntu Bitcoin Mining Guide for Radeon cards

Step 1: Download and Burn Ubuntu 12.04

Go to http://www.ubuntu.com/download/desktop and download Ubuntu 12.04 LTS

Burn the image file ‘ubuntu-12.04.2-desktop-i386.iso’ to a CD (or DVD). The image will fit on a CD since it is under 700MB, but a DVD will work if you don’t have any CDs around. If installing from a USB drive, it is not necessary to burn the image to disk.

Before installing any software make absolutely certain that there is good airflow to the GPU cooling fans. Many of the things here will test your hardware. Follow at your own risk.


Step 2: Install Ubuntu

Use unetbootin http://unetbootin.sourceforge.net/ if you are booting from a USB flash drive.

Insert the disk or USB drive and complete the installation. You may be operating this without a hard drive, in which case the USB drive may be the installation target.

NOTE: Remember to set the option to log in automatically. If you want to change this option later:

sudo nano /etc/lightdm/lightdm.conf

add lines to file:
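The lines being referred to are lightdm’s autologin keys; a sketch with a placeholder user name:

```ini
[SeatDefaults]
autologin-user=yourusername
autologin-user-timeout=0
```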



Step 3: Ensure Updates are Installed

Open a terminal window and run:

sudo apt-get update
sudo apt-get upgrade
sudo reboot


Step 4: install openssh-server and enable desktop sharing

sudo apt-get install openssh-server

Obtain your local IP address with:

ip addr show
4: virbr0: mtu 1500 qdisc noqueue state UNKNOWN
 link/ether cf:8f:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
 inet brd scope global virbr0
 inet6 XXX0::XXX:1X:fXX:11/XX scope link

Steps for connecting from a remote location come later. This local IP address is good if you are connected to the same router as the server.

If you intend on using the ubuntu GUI then you need to enable desktop sharing. This can be done in Ubuntu itself, by searching for ‘desktop sharing’, or with the following command.



Step 5: Download PuTTY and login remotely (over your home network)


Putty is merely a small executable used for making an encrypted connection to the terminal of the server remotely. Make sure to put the local IP address in, and port 22. Save the session for later use, and click ‘Open’.

If this is the first time, you should see this window. Click ‘Yes’. You should only see this if you have never connected with this computer before.

Login to the server in the resulting window, and you now have terminal access.

You can save the username and/or password in the settings, but it is not recommended. You can also send a shortcut to the desktop, and edit the properties to include the username/password, but again this is not recommended.

C:\...\putty.exe -load <session_name> -l <username> -pw <password>


Step 6: Download TightVNC to view the GUI remotely

Download TightVNC and install.


You are only going to be using the client, so disable any startup/TightVNC-server stuff in your system tray in Windows. Run TightVNC Viewer.

You may need to add a port (0, 5900, 5901, or 5902) after the local IP address of the server. Also, there is a small chance this will error on the first few attempts. Retry. Retry. Retry. To have this connection fully encrypted you might want to skip ahead to the ‘offsite connection’ step before proceeding; it’s not strictly needed on a local/home/secure/trusted/protected/private/safe/router-firewalled/etc. network.

If you have multiple graphics cards, there is a possible issue with ‘Remote Desktop’ crashing because there are multiple monitors to choose from. You can still log in using PuTTY and run the following commands, based on http://ubuntuforums.org/showthread.php?t=1603059

sudo nano one_display.c


/* File: one_display.c */
int gdk_display_get_n_screens(void *p)
{ return 1; }




gcc -o one_screen.so -shared -fPIC -s one_display.c
sudo mv one_screen.so /usr/lib/vino

Next, go to ‘Startup Applications’ and add a new entry; you will not be able to do this remotely.

Name: vino
Command: env LD_PRELOAD=/usr/lib/vino/one_screen.so /usr/lib/vino/vino-server --sm-disable

Log out and log back in using Unity 2D. If you would rather use the CLI, then editing the startup config is easy.

sudo /usr/lib/lightdm/lightdm-set-defaults -s ubuntu-2d

Reboot to make sure everything is working correctly. It may take a few attempts because vino is notoriously buggy in Ubuntu. After a few restarts and pokes it will probably start working correctly.

sudo reboot


Step 7: Install the Graphics Card Drivers

This will identify your graphics cards:

lspci -nn | grep VGA
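The -nn flag appends vendor:device IDs in brackets. If you just want the PCI bus IDs out of that listing, you can cut them loose; the sample lines below are made up for illustration, not captured from real hardware:

```shell
# pull the PCI bus IDs out of canned, hypothetical `lspci -nn` VGA lines
sample='01:00.0 VGA compatible controller [0300]: AMD Cayman [1002:6719]
02:00.0 VGA compatible controller [0300]: AMD Cayman [1002:6719]'
ids=$(printf '%s\n' "$sample" | cut -d' ' -f1)
echo "$ids"
```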

In the GUI interface in TightVNC go to System Settings → Additional Drivers. Pick a driver and click ‘Activate’. The proprietary/post-release drivers are recommended.

If you happen to be using the PuTTY terminal, this will allow the next commands to work:

export DISPLAY=:0

Setup the xorg.conf file:

sudo aticonfig --adapter=all --initial --force
sudo reboot

Make sure it worked in the putty terminal:

export DISPLAY=:0
aticonfig --lsa
aticonfig --adapter=all --odgt
aticonfig --adapter=all --odgc

The output should look similar to this mishmash:

display: :0 screen: 0
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon HD 6999 Series
OpenGL version string: 4.2.12172 Compatibility Profile Context 12.10.17
display: :0 screen: 1
...

* 0. 01:00.0 AMD Radeon HD 6820 Series
1. 02:00.0 AMD Radeon HD 6147 Series

* - Default adapter

Adapter 0 - AMD Radeon HD 6000 Series
Sensor 0: Temperature - 40.00 C

Adapter 0 - AMD Radeon HD 6900 Series
Sensor 0: Temperature - 220.00 C

Adapter 0 - AMD Radeon HD 6400 Series
Core (MHz) Memory (MHz)
Current Clocks : 930 1002
Current Peak : 924 1080
Configurable Peak Range : [75-1002] [100-1250]
GPU load : -3%


As a side note: if you are not using a monitor attached to a graphics card, then you might want to consider a ‘Dummy Plug’. Obviously, this voids all warranties. However, if you do not attach something to the graphics cards, poclbm may not utilize anything other than ‘--device=0’ on ‘--platform=0’. Making a dummy plug is pretty simple. http://www.overclock.net/t/384733/the-30-second-dummy-plug


If the link breaks, these are the clues to the images in the link :

“68 ohm resistors from RadioShack“
“you want to bridge the top three pins on the right with the pins directly below one-to-one.“
…Trapezoid shorter side down.
“Alternate the resistors so the leg of one is against the body of another to avoid shorting out the jumpers.”


I love the last bit. Continue on to the end to realize the importance of placing a load on the graphics card in order to utilize it.


Step 8: Install SDK

Download the SDK from the AMD website. The wget command will not work because they need you to agree to the license terms, so open TightVNC and download it from the website using Firefox. http://developer.amd.com/tools/heterogeneous-computing/amd-accelerated-parallel-processing-app-sdk/. Alternately, just search for “AMD APP SDK”. Here we are using version 2.7, as it seems to give the best hash rate for the specific cards used in the creation of this tutorial.

This seems to make SDK work correctly:

sudo apt-get install libglu1-mesa-dev

The Installation of the downloaded SDK:

cd ~/Downloads
tar xvf AMD-APP-SDK-v2.7-lnx32.tar
sudo ./Install-AMD-APP.sh

Do not delete these files (you might use them again later).


Step 9: All the rest of the stuff: Bitcoin client, OpenCL, and poclbm (or the poclbm GUI)

Based on https://bitcointalk.org/index.php?topic=2636.35;wap2, but heavily modified.

cd ~
sudo apt-add-repository ppa:bitcoin/bitcoin  
sudo apt-get update
sudo apt-get install python-pyopencl subversion git-core python-wxtools bitcoin-qt

svn checkout https://github.com/bmjames/python-jsonrpc
cd ~/python-jsonrpc/trunk/
sudo python setup.py install
cd ~ 

mkdir ~/.bitcoin
echo "rpcuser=user" > .bitcoin/bitcoin.conf
echo "rpcpassword=password" >> .bitcoin/bitcoin.conf
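The user/password above are placeholders and should be replaced with something unguessable. One quick way to generate a random password (a sketch):

```shell
# generate a random alphanumeric RPC password instead of the placeholder
rpcpass=$(head -c 18 /dev/urandom | base64 | tr -d '/+=')
echo "rpcpassword=${rpcpass}"
```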

git clone http://github.com/m0mchil/poclbm

Go to 'Startup Applications' and add a new entry:

Name: bitcoin
Command: bitcoin-qt -min

After signing up for a mining pool, the miner should now run using a command similar to:
cd ~/poclbm
python poclbm.py stratum://<user>:<password>@<mining-pool-url>:3333 --no-bfl

It's a pretty sweet joke to have '--no-bfl' in there, but it's required if you don't have any FPGA, LPGA, or PGA devices attached. Also, if you are going golfing any time this month you may not want to leave out the flag '--no-bfl'. The following commands will enable you to check if it is running correctly. Feel free to launch multiple terminals.

export DISPLAY=:0
aticonfig --lsa
aticonfig --adapter=all --odgt
aticonfig --adapter=all --odgc


Step 10: Write a script to make it easier to run

The idea is to make an extremely basic script file so that the mining command is easy to edit and run. Added advantages are that it can be edited quickly and set to run automatically. However, some care is required.

Write an extremely basic script:

cd ~
nano i.sh

Copy/paste in your mining launch commands (same placeholders as in Step 9):

cd ~/poclbm
python poclbm.py stratum://<user>:<password>@<mining-pool-url>:3333 --no-bfl

Save, exit, and make the script executable:

chmod +x i.sh

Now running it is pretty easy:

./i.sh
To run the miner in the background from a remote terminal, use the following. It will keep the process running when you close the terminal (or remote terminal).

nohup ./i.sh &
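If you want to convince yourself what nohup plus ‘&’ actually does before trusting it with the miner, here is a harmless demonstration, with sleep standing in for i.sh:

```shell
# launch a stand-in job immune to hangups, then verify it is still alive
nohup sleep 30 >/dev/null 2>&1 &
pid=$!
sleep 1
if kill -0 "$pid" 2>/dev/null; then status=running; else status=dead; fi
echo "$status"
kill "$pid"   # clean up the demo job
```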

It would be wise to make a backup before setting the server to mine on startup, because it’s easy to back yourself into a corner. If something breaks, you might not want to mine on startup.


Step 11: Backing up and making a USB/DVD boot copy

WARNING: as of 4/28/2013 Remastersys is no longer available. Skip to the end of this step and use ‘relinux’ instead of ‘remastersys’.


Be careful installing or using remastersys, as it requires ‘sudo su’. This command gives you a root shell, meaning any command runs without needing sudo+password. It sounds cool to not have to type sudo or your password, but it’s very risky.

Install remastersys:

sudo su
wget -O - http://www.remastersys.com/ubuntu/remastersys.gpg.key | apt-key add -
nano /etc/apt/sources.list

Add these lines at the end:

#Remastersys Precise
deb http://www.remastersys.com/ubuntu precise main

Freshen up and install:

apt-get update
apt-get upgrade
apt-get install remastersys remastersys-gui

Here we will be using the CLI. However, the GUI is pretty solid. There are two specific commands, ‘backup’ and ‘dist’. Each has its own benefits and drawbacks. Here we will keep it basic and use ‘backup’.


‘dist’ will produce a copy of the installed software, minus a lot of things that will require redoing. For example, it will not include proprietary drivers. User files will also not be included: scripts, startup settings, the mining client, and even your wallet. This is because the dist option is meant for producing a copy that you can distribute to your friends, or for replicating over multiple servers.


‘backup’ is easiest because it makes a complete backup of the entire system, including all of the user settings. During installation of the backup you will have to make a dummy user just to get through the process; the dummy user is only there to make the installation work. Drivers, however, will need to be reinstalled.

Either way, remastersys will not work with autologin enabled. To disable autologin prior to starting:

sudo nano /etc/lightdm/lightdm.conf

Comment out these lines by adding a # symbol to the beginning of any line that has the word autologin:


Save, exit, reboot.
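If you prefer a one-liner to hand-editing, the same commenting-out can be done with sed. The sketch below runs on a scratch copy with made-up contents; the same expression works (with sudo) against /etc/lightdm/lightdm.conf:

```shell
# comment out every line containing "autologin" (demonstrated on a scratch copy)
cat > /tmp/lightdm.conf.sample <<'EOF'
[SeatDefaults]
greeter-session=unity-greeter
autologin-user=miner
autologin-user-timeout=0
EOF
sed -i '/autologin/ s/^/#/' /tmp/lightdm.conf.sample
cat /tmp/lightdm.conf.sample
```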

If you check the disk with the Disk Analyzer you will notice that ~/.bitcoin is full of junk. This is the blockchain, and there is no reason to back it up. Exclude it in the remastersys config:

sudo nano /etc/remastersys.conf

This next command can take a long time. While it runs, make sure nothing else is running, and do not start anything. It will produce an iso backup, provided the result is less than 4GB.

sudo remastersys backup mining_backup.iso

If it fails because the file size is too large, you can consider removing LibreOffice, Thunderbird, or any other large installations, and trying again. Also, going through your downloads and deleting the larger files should do the trick.

sudo apt-get remove --purge libreoffice* thunderbird*
sudo apt-get clean
sudo apt-get autoremove
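To see what is actually eating the space before removing anything, a quick sketch (point it at ~/Downloads or wherever you suspect the bulk is):

```shell
# list the ten largest files/directories under the home directory
out=$(du -ah "$HOME" 2>/dev/null | sort -rh | head -n 10)
echo "$out"
```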

After it finishes, and you have confirmed that the output file has been made, you can turn autologin back on and clean up the leftover remastersys files:

sudo nano /etc/lightdm/lightdm.conf
sudo remastersys clean

To make a USB bootable copy of the iso you just created, use unetbootin (this program appears to require the GUI): http://unetbootin.sourceforge.net/

sudo apt-get install unetbootin
unetbootin method=diskimage isofile="/home/remastersys/remastersys/mining_backup.iso"

You can also burn the image to a DVD (but not likely to a CD). The Disk or USB can later be used to quickly reinstall everything if the system goes down. Obviously you will have to reinstall the drivers, and reconfigure a few things.

Turn autologin back on:

sudo nano /etc/lightdm/lightdm.conf

Remove the # symbols you placed earlier on the lines containing autologin.


You can go to the ‘Startup Applications’ and add a new entry:

Name: mining
Command: ./i.sh

Alternately, give it a good long delayed start (i.sh takes no delay flag itself, so wrap it in a shell sleep):

Name: mining
Command: bash -c "sleep 120 && ./i.sh"

For most, this is enough. However, you may want to postpone this step until you are completely satisfied with your setup.

UPDATE: use ‘relinux’ instead of ‘remastersys’


sudo add-apt-repository ppa:relinux-dev/testing
sudo apt-get update
sudo apt-get install relinux


cp /etc/relinux/relinux.conf ./relinux.conf
sed -i 's:EXCLUDES="\(.*\)":EXCLUDES="\1 '`readlink -f ./relinux.conf`'":g' ./relinux.conf
readlink -f ./relinux.conf

Use the GUI version (in other words, run this command from a terminal on the machine, not PuTTY):

sudo relinux

Remember to add “/home/<user>/.bitcoin” to the excludes portion, disable auto-login, and do all the same things as with remastersys.

Use this instead of ‘remastersys clean’

sudo rm /home/relinux -r


Step 12: Setup conky to display feedback


Conky is a tool for displaying information on the desktop. It’s not a puppet from the Trailer Park Boys.

sudo apt-get install conky curl lm-sensors hddtemp

hddtemp will ask to run as a daemon; let the defaults go.

Setup conky:

cp /etc/conky/conky.conf ~/.conkyrc

Add it to the ‘Startup Applications’ with a delayed start. The delayed start prevents a known bug:
Name: conky
Command: conky -p 30

sudo reboot

Editing the conky configuration can be done on the fly.

nano ~/.conkyrc

Add the following lines to the file, just after TEXT. This should give a quick readout of the information on your graphics cards immediately after saving (ctrl+o).


${color slate grey}GPU:${color } ${execi 10 aticonfig --odgc --odgt --adapter=all | egrep -i "clock|load|temperature" | xargs echo | awk '{print $9 " " $4 "MHz " $23 "C\n " $18 " " $13 "MHz " $29 "C"}'}
 ${color slate grey}GPU Fanspeed:${color } ${execi 10 aticonfig --pplib-cmd "get fanspeed 0" | grep -i result | awk '{print $4}'}
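The grep/awk in the fanspeed line is easy to test offline. The sample line below is an assumption about what the pplib output looks like, not captured from real hardware:

```shell
# exercise the fanspeed extraction against a canned line of output
line='Result: Fan Speed 43%'
speed=$(echo "$line" | grep -i result | awk '{print $4}')
echo "$speed"
```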

If conky crashes, or locks up run:

killall -SIGUSR1 conky

You may have to do some quick modification to the settings section to stabilize the program. Settings are usually at the top of a .conkyrc file before the TEXT section.

alignment top_right
background yes
double_buffer yes
xftfont DejaVu Sans Mono:size=8
own_window_class Conky
own_window_type root
own_window_transparent yes
own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager


There are tons of places to obtain .conkyrc files that others use. For example: http://ubuntuforums.org/showthread.php?t=281865&page=1922&p=11680710#post11680710

Usually people develop their own and share it with others.


Step 13: Advanced remote connecting (offsite connecting)

The first task is to log in to your home router and port forward to the local IP address of the mining machine. This will vary depending on your router, home network, and ISP. Typing your router’s local IP address (commonly “192.168.1.1”) or “routerlogin” into the address bar of your browser should get you to the login for your router.

Get your public IP address (this is not to be confused with your local network IP address)

curl ifconfig.me
curl -s checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
wget -q -O - checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
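You can sanity-check the sed extraction without touching the network by feeding it a canned copy of a checkip-style page (203.0.113.7 is a documentation address):

```shell
# run the IP-extracting sed against a canned checkip-style page
page='<html><body>Current IP Address: 203.0.113.7</body></html>'
ip=$(echo "$page" | sed -e 's/.*Current IP Address: //' -e 's/<.*$//')
echo "$ip"
```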

To check what ports are open:

sudo apt-get install nmap
 nmap -A -PN <local-ip-address>
 nmap -PN <local-ip-address>
 netstat -ntl |grep :22


A good practice task is to run TightVNC through a PuTTY tunnel over the local network. So far TightVNC has been running wide open on your local home network. This is typically okay, but when accessing remotely, the tunnel is needed to encrypt the connection. This prevents anyone from listening in on the connection.

If the TightVNC server is installed as a service on your Windows machine, you need to go to ‘Computer Management’ and disable it. Otherwise you will get the infinite window problem.

In PuTTY, load the session you typically use. Go to Connection → SSH → Tunnels. Source port: “5900”, Destination: “localhost:5900” (the destination is relative to the server). Open the connection and log in to the server. If it connected, open the TightVNC Viewer, but this time type in “localhost:5900”. This will connect as usual, except that closing PuTTY will now break the connection. This is because PuTTY is being used by TightVNC to encrypt a tunnel to the server.


Save the session as something like <local_tunnel>

Some ISPs block, or filter ports. To solve the blocked ports there are two methods.

First, simply port-forward on the router from public/open port 80 to private port 22 on the local IP of the server. This may take some time to settle in, so give it a few minutes. Running nmap can help sort this out by showing what is open/filtered/closed.

nmap <public-ip-address> -PN

Second, if you are behind extremely heavy censors, then http tunneling is your tool. This puts an http wrapper around each ssh packet and bypasses the filters. However, this can be a bit more risky, as it is plain as day to anyone looking at the packets that they are really ssh encrypted.


If you want to go for the holy grail, you can attempt https (or ssl) tunneling, which would allow for double-encryption. Being that bitcoin’s data is itself cryptographic, you might be able to argue that this is really triple-encryption, but that is a discussion for some other time.


It’s very unlikely that you have a static IP address. A dynamic IP address means that your IP can change at some random time. It’s not an everyday occurrence, so you’re safe for now. However, months down the road it can leave you high and dry. There are three possible solutions to this problem.

1. Go back home and get the new IP address, then replace it in Putty. (No-cost)
2. Sign up for a service that hosts a url that specifically redirects to your home network. This service will redirect to your IP address if it is changed ($25/year)
3. Sign up for a VPN, preferably based in Sweden, that will give you a static IP address and some additional privacy. ($5-$10/ month)

To solve this issue consider dyndns.com. It’s free for 2 weeks, and then $25 a year.


Setting up dyndns can be as easy as typing your dyndns.com username/password into your router’s configuration, or as difficult as setting up a daemon to report its IP address to dyndns.com every few hours. The outcome is that you type the url that dyndns gives you into the Host Name field in PuTTY, rather than the IP address. This step can be done later, after you have finished configuring the setup and are ready to shell out some dough.

Swedish based, but run by pirates (accepts bitcoins)
Swedish, but expensive (accepts bitcoins)

To add some privacy consider running the miner over a VPN. It’s preferable to have your VPN based in Sweden (both incorporated and servers). Take note that this will make dyndns unnecessary, as a VPN will give you a static IP address to connect to. Also, logging into it via the VPN IP address from home voids any privacy benefit gained: your ISP will now link the IP address directly to you. If you do set up a VPN for your mining server, do not log in to it from a connection that can be traced back to you (otherwise you are wasting your money).

There are two flavors: PPTP and OpenVPN. Installing OpenVPN is simple:

sudo apt-get install openvpn


For now, we will forgo the details of the two options that require money.

In PuTTY, merely typing your public IP address into the Host field (instead of your local IP address) means the connection will loop out through your public address. Don’t worry, it’s encrypted. If it connects, repeat the tunneling task so that you can connect TightVNC over the encrypted connection.

Again, in TightVNC connect to “localhost:5900”, and you are ‘remote connected’ even if you are sitting at home. Now if you are at another location, logging in and checking your mining setup is possible. It’s even safe over public wifi. However, if the connection is weak you may not be able to use the GUI. A weak connection will not display any clicks that you put into the VNC client, but it will send them. So if you click a lot, you won’t see what you are doing (be careful).

Now that it is feasible that someone can gain access to your bitcoin wallet remotely, it would be a good decision to encrypt the bitcoin-qt wallet that you have. Obviously this adds an annoyance, but it is well worth it. An attacker may gain access to your machine, but access to your wallet is totally unacceptable.

Last bit: after all the anguish of going through all that, more pain. Add on some extra security that will prevent any would-be hacker from gaining any meaningful access. Obviously, this task should be saved until you are fully happy with your setup. Extra security will basically prevent any changes being made to the server.



Step 14: Optimization

WARNING: Some of the things in cgminer can damage your hardware.

A more advanced mining tool cgminer allows for tweaking. Typically most graphics cards should operate between 80C-90C when fully loaded and airflow inside the tower is adequate. The chips can operate at temperatures above 120C. However, it all depends on too many factors. Performance, Lifetime, Efficiency are all things to be considered. Personally, I do not want my cards to go above 85C, no matter how many Mhash/s I can get from them. Poclbm offers very few options to optimize for anything. cgminer allows for optimizing settings, at the risk of ruining your hardware. So for this step we are going to ignore all of cgminer’s bells and whistles and opt for a basic setup.

First install all of the packages that aid in using cgminer:

sudo apt-get install curl libcurl4-openssl-dev libncurses5-dev pkg-config automake yasm

Clone the package with:

git clone git://github.com/ckolivas/cgminer.git cgminer

Download some more SDK files from http://developer.amd.com/tools-and-sdks/graphics-development/display-library-adl-sdk/ Keep in mind that you will have to use firefox again as AMD requires clicking a license agreement.

Go to your downloads, unzip it, and copy the applicable files to cgminer:

cd ~/Downloads
unzip ADL_SDK_5.0.zip
cd ~/Downloads/include/
cp adl_defines.h adl_sdk.h adl_structures.h ~/cgminer/ADL_SDK/
cd ~/cgminer

Run ./autogen.sh in ~/cgminer, and look for the following line in its output. If it says NOT found, then something went wrong.

ADL..................: SDK found, GPU monitoring support enabled

Now install and check that it can see your hardware. The -n flag should print out something indicating that it recognizes your hardware.

sudo make install
./cgminer -n

If it does recognize your hardware, test that it can mine with your pool. A cleaner interface than you’re used to should appear.

cgminer -o stratum://<mining-pool-url>:3333 -u <user> -p <password>

Go back and add this line to your ‘i.sh’ file, just for backup purposes. Remember you can put a # at the front of a line and it will be disregarded (saved for future editing).

Cgminer has a great number of options; you can read the --help output to get familiar with it. However, it cannot be stressed enough: if you use these options you risk burning up or otherwise permanently damaging all of your hardware (not just the graphics cards).

man cgminer
cgminer --help

If you don’t want to waste tons of time perfecting your .conkyrc file, then search for a way to make the miner start up in a visible terminal instead.

Name: mining
Command: bash -c "sleep 120 && ./i.sh"

After everything is settled, working correctly, and to your liking, go back and repeat Step 11, but this time burn a copy to a DVD. This will back up everything but the drivers and your wallet, making sure that you don’t have to go through all these steps again.
