Wigglecam

Introduction to Physical Computing

26 Sep 2018

For the last couple of weeks we have been going over microcontroller basics with Arduino, discussing topics such as analog and digital inputs, outputs, resistors, etc. This week I wanted to push myself to use hardware I'm unfamiliar with, so I decided to fabricate a project that has been on my to-do list for quite a while: the Wigglecam.

Background:

A wigglegram, also known as a stereograph, is an animated image that simulates 3D by looping frames of an object taken from different vantage points. While stereoscopes have been around since 1838, they have seen a large resurgence in digital media today in the form of GIFs.

There has been a rise of phone apps that assist in making wigglegrams by taking a video or panoramic photograph and removing frames to make a gif. Instead, I aim to make a more traditional stereoscopic camera, using two cameras to take photographs simultaneously and produce the GIF from those.

stereoscope image

Process:

Objective:

To make a Wigglegram camera, or a Wigglecam, I will use a Raspberry Pi and two USB cameras to generate the gif. I aim to make it tactile by using switches to operate the camera and LEDs to give feedback as the Raspberry Pi processes the images. Once processed, the Raspberry Pi will upload the rendered wigglegram to a server for the world to see. :smile:

Parts list:

Software:

My setup

Making a GIF

I started the project by first getting the cameras to function. Since I have two Logitech USB webcams handy, I'm using fswebcam to interface with them.

Get the fswebcam package with sudo apt-get install fswebcam in the terminal. You can then take an image using the command fswebcam image_name.jpg. 💯

Next we need to be able to access both cameras to take pictures. We do this with the flags -d /dev/video0 and -d /dev/video1 respectively. Another useful flag is -r 640x640 to set the resolution. Also, you can turn off the banner timestamp with --no-banner.
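Putting those flags together, each capture boils down to one command string per device. Here's a tiny helper of my own (not part of fswebcam) that builds the command for a given /dev/videoN device:

```python
def fswebcam_cmd(device, filename, resolution="640x640"):
    """Build an fswebcam capture command for a given /dev/videoN device."""
    return "fswebcam -d /dev/video{} -r {} --no-banner {}".format(
        device, resolution, filename)

print(fswebcam_cmd(0, "image_1.jpg"))
# fswebcam -d /dev/video0 -r 640x640 --no-banner image_1.jpg
```

This keeps the two camera commands consistent so only the device index and filename change between them.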

Now that we can make our cameras take pictures at will, I'll write a script to automate the process. To let the camera take two pictures simultaneously, I am using the subprocess module, which lets both captures run in parallel while the program continues.

wigglegram.py:

import subprocess

# start subprocesses
cameraOne = subprocess.Popen("sudo fswebcam -d /dev/video0 --no-banner -r 640x640 image_1.jpg", shell=True)
cameraTwo = subprocess.Popen("sudo fswebcam -d /dev/video1 --no-banner -r 640x640 image_2.jpg", shell=True)

# wait for them to finish
cameraOne.wait()
cameraTwo.wait()

# print a completion statement
print("Took the pictures!")

Now we can take our pictures by typing python wigglegram.py :stuck_out_tongue:
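One thing I found worth adding (my own check, not part of the original script): confirm that both frames actually landed on disk before moving on to the gif step, since a loose USB cable can silently produce a missing or empty file.

```python
import os

def frames_ready(paths=("image_1.jpg", "image_2.jpg")):
    """Return True only when every expected frame exists and is non-empty."""
    return all(os.path.exists(p) and os.path.getsize(p) > 0 for p in paths)

if not frames_ready():
    print("Missing a frame -- check the camera connections.")
```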

Image 1 from script | Image 2 from script

Next we need to join the pictures into a gif. We'll use the CLI tool ImageMagick. You can install it with:

sudo apt-get update
sudo apt-get install imagemagick

You can use ImageMagick's convert command to turn the separate images into a gif. It looks like: convert -loop 0 -delay 10 -size 640x640 image_1.jpg image_2.jpg output.gif

Wigglegram

There are a few handy flags to note. -loop sets the number of times the gif loops (0 means infinite). -delay sets how long each frame is shown, measured in ticks of 1/100 of a second (not milliseconds). -size sets the size, and finally you list the images you want to include in the gif followed by the name of the output file.
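Since -delay is in ticks of 1/100 of a second, -delay 10 means 0.1 s per frame, or 10 fps. A quick converter to sanity-check the timing:

```python
def gif_timing(delay_ticks):
    """Convert ImageMagick's -delay (ticks of 1/100 s) to (seconds per frame, fps)."""
    return delay_ticks / 100.0, 100.0 / delay_ticks

print(gif_timing(10))   # (0.1, 10.0) -> 0.1 s per frame, 10 frames per second
```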

Since ImageMagick is a command-line tool, I need to import the os module to make a system call from within my Python script.

gify.py:

import os

os.system("convert -loop 0 -delay 10 -size 640x640 image_1.jpg image_2.jpg output.gif")

print("Finished the Gif!")

At this point we can take photos with wigglegram.py and afterwards render a gif with gify.py. Next, let's refactor these scripts into one and prepare it for the hardware.

First I turned each of my scripts into functions. Note that I import gify here and call its function after taking the pictures, so the gif renders automatically.

wigglegram.py:

import subprocess
import gify

def photograph():
	c1 = subprocess.Popen("sudo fswebcam -d /dev/video0 -r 640x640 --no-banner image_1.jpg", shell=True)
	c2 = subprocess.Popen("sudo fswebcam -d /dev/video1 -r 640x640 --no-banner image_2.jpg", shell=True)
	c1.wait()
	c2.wait()

	print("took the pictures!")
	gify.render_gif()

gify.py:

import os
def render_gif():
	print("making gif \n\n")
	os.system("convert -loop 0 -delay 20 -size 640x640 image_1.jpg image_2.jpg output.gif")

	print("finished making gif!")

Next I'm going to wire a simple circuit to trigger the scripts. I followed a helpful guide at Raspberry Pi HQ.

Here is the circuit for the switch:

Switch schematic

My switch is connected to GPIO 10. Refer to RPi pinout diagram. The actual circuit:

Circuit

To program the switch we must import the RPi.GPIO library. In my case I had to install it since I'm working on an older Pi: sudo apt-get install python-rpi.gpio python3-rpi.gpio

Program the button to take the picture! Make sure to import wigglegram!

button.py:

import RPi.GPIO as GPIO
import wigglegram

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(10, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def button_callback(channel):
	wigglegram.photograph()

# bouncetime (in ms) keeps switch bounce from triggering multiple shots
GPIO.add_event_detect(10, GPIO.RISING, callback=button_callback, bouncetime=200)

message = input("Press button to take a picture...\nPress enter to quit\n\n")
GPIO.cleanup()

Finally, you can run the program with the command sudo python button.py, and when you press the button it will take a photo and render a gif for you!

Wigglegram

Voila! 💁 👍

The next part will cover adding LED feedback and building a housing for the camera.