Introduction: CribSense: a Contactless, Video-based Baby Monitor

CribSense is a video-based, contactless baby monitor that you can make yourself without breaking the bank.

CribSense is a C++ implementation of Video Magnification tuned to run on a Raspberry Pi 3 Model B. Over a weekend, you can set up your own crib-top baby monitor that raises an alarm if your infant stops moving. As a bonus, all of the software is free to use for non-commercial purposes and is easily extensible.

The full repository containing source files and documentation can be found at https://github.com/lukehsiao/CribSense.

While we think CribSense is pretty fun, it is important to remember that it is not a certified, foolproof safety device. It needs to be properly configured and to have a well-controlled environment in order to work: if it is poorly calibrated, or the scene in the video is not conducive to video magnification, you may not be able to use it. We made this as a fun project to see how well compute-heavy software like video magnification could run on compute-limited hardware like a Raspberry Pi. Any real product would require far more testing than we have done. So, if you use this project, take it for what it is: a short exploration of video magnification on a Pi.

What you will need:

Raspberry Pi + Camera + Configuration Tools:

IR LED Circuit for low-light operation:

Chassis:

  • Access to a 3D printer (minimum build volume = 9.9" L x 7.8" W x 5.9" H) to print our chassis. However, feel free to construct your own.
  • Glue (any type of glue will work, but hot glue is recommended for prototyping).

Step 1: Prerequisites

Before you start our step-by-step guide, you should have already installed the latest version of Raspbian on your SD card and ensured that your Pi is functional. You will also need to enable the camera module before you can access the camera.

Step 2: Installing the CribSense Software

CribSense depends on autoconf, libtool, OpenCV, and libcanberra, as well as common software tools.

  • autoconf and libtool are used to automatically configure makefiles and build scripts for CribSense on many platforms (like Linux, OSX, and the Raspberry Pi).
  • OpenCV is a powerful computer vision package used to do image processing and is the basis of the video magnification and motion detection code. It has great support, is easy to use, and has good performance.
  • libcanberra is a simple library for playing event sounds. It is used to play the alarm sound for CribSense.

Visit their individual pages to get full details.

Install these by opening a terminal on your Pi and running:

sudo apt-get install git build-essential autoconf libtool libopencv-dev libcanberra-dev

Next you need to set the camera driver to autoload by adding bcm2835-v4l2 to `/etc/modules-load.d/modules.conf`. Your modules.conf should look like this:

# /etc/modules: kernel modules to load at boot time. 
#
# The file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

i2c-dev
bcm2835-v4l2

Once the file has been edited, you must reboot your Pi. This driver is used by CribSense to directly pull frames from the NoIR Camera.

Then, you can clone the repository by running:

git clone https://github.com/lukehsiao/CribSense.git

Next, move into the repository and build the software by running

cd CribSense
./autogen.sh --prefix=/usr --sysconfdir=/etc --disable-debug
make
sudo make install
sudo systemctl daemon-reload

Congratulations, you have installed all the necessary software!

Configuration

CribSense is customizable through a simple INI configuration file. After running `make install`, the configuration file is located at /etc/cribsense/config.ini. You can view and edit these parameters by running

sudo nano /etc/cribsense/config.ini

A brief explanation of each parameter is given in the default configuration, but more details are available at https://lukehsiao.github.io/CribSense/setup/config/. We will also discuss calibration and configuration at the end of this guide.

Running CribSense

CribSense was designed to run at startup by using a systemd service. While you are connected to your Raspberry Pi with your keyboard and mouse, you should make sure that the configuration parameters work for your crib. You may need to re-tune these parameters if you move it.

While you are tuning the parameters, you can run cribsense at will from the command line by running

cribsense --config /etc/cribsense/config.ini

Once you are satisfied, you can enable autorun by running

sudo systemctl enable cribsense

You can stop cribsense from running automatically by running

sudo systemctl disable cribsense

Software Overview

The CribSense software is the heart and soul of this project. We saw some of the great demos of video magnification from MIT and wanted to try running a similar algorithm on a Raspberry Pi. This required more than a 10x speedup over tbl3rd's C++ implementation of video magnification in order to run in real time on the Pi. The required optimizations guided our design of the software.

At a high level, CribSense repeatedly cycles through a software state machine. First, it divides each 640x480, grayscale video frame into 3 horizontal sections (640x160) for better cache locality. It then magnifies each band in a separate thread, and monitors the motion seen in the frame. After monitoring motion for several seconds, it determines the primary area of motion and crops the frame to it. This reduces the total number of pixels the algorithm needs to process. Then, CribSense monitors the amount of motion in the cropped stream and sounds an alarm if no motion is perceived for a configurable amount of time. Periodically, CribSense will open its view again to monitor the full frame in case the infant has moved and re-crop around the new primary area of motion.

Video magnification is used to boost the signal-to-noise ratio of subtle movements like infant breathing. It is unnecessary for larger movements, but can help for very subtle ones. Note that our implementation is loosely based on the algorithm described in MIT's papers and does not perform as well as their proprietary code.

Optimizations like multithreading, adaptive cropping, and compiler optimizations gave us approximately 3x, 3x, and 1.2x speedup, respectively. This allowed us to achieve the 10x speedup required to run in real time on the Pi.

Full details can be found on the Software Architecture page of the CribSense repository.

If you are interested in video magnification, please visit MIT's page.

Step 3: Getting Your Hardware Ready: Connect Your Camera

First, you need to swap the 6" cable that came with the camera for the 12" cable. To do this, you can simply follow this tutorial on how to replace the camera cable.

In summary, you will see a push/pull tab on the back of the camera that you can pull out to release the flex cable. Replace the short cable with the longer one and push the tab back in.

You will notice a 24" cable in our pictures; it was too long. The 12" cable on the materials list is a much more reasonable length.

Step 4: Getting Your Hardware Ready: IR LED

CribSense is relatively easy to construct, and is largely made up of commercially available parts. As seen in the figure above, there are 5 main hardware components, only 2 of which are custom made. This page will walk through how to construct the IR LED circuit, and the next page will go over how to construct the chassis.

For this part, you need to get your soldering iron, wires, diodes, IR LED, and resistor. We will be constructing the circuit shown in the 2nd figure. If you are new to soldering, here is a nice guide that will catch you up. While this guide discusses through-hole soldering, you can use the same basic techniques to connect these components together as shown in the 3rd figure.

In order to provide adequate lighting at night, we use an IR LED, which is not visible to the human eye but visible to the NoIR camera. The IR LED does not consume a lot of power compared to the Raspberry Pi, so we leave the IR LED powered on for the sake of simplicity.

In earlier versions of the Pi, the maximum current output of the GPIO pins was 50mA; the Raspberry Pi B+ increased this to 500mA. However, we just use the 5V power pins for simplicity, which can supply up to 1.5A. The forward voltage of the IR LED is about 1.7~1.9V according to our measurements. Although the IR LED can draw 500mA without damaging itself, we reduce the current to around 200mA to cut heat and overall power consumption; experiments also showed that the IR LED is bright enough at 200mA. To bridge the gap between 5V and 1.9V, we use three 1N4001 diodes and a 1 Ohm resistor in series with the IR LED. The voltage drops over the wire, the diodes, and the resistor are about 0.2V, 0.9V (each), and 0.2V, respectively. Thus, the voltage over the IR LED is 5V - 0.2V - (3 * 0.9V) - 0.2V = 1.9V. The heat dissipation is 0.18W over the LED and 0.2W over the resistor, all well within their maximum ratings.

But we are not done yet! In order to get a better fit in the 3D printed chassis, we want to have the IR LED lens protrude from our chassis and have the PCB board flush with the hole. The small photodiode in the bottom right will get in the way. To remedy this, we desolder it and flip it to the opposite side of the board as shown in the final two photos. The photodiode is not needed since we want the LED to always be on. Simply switching it to the opposite side leaves the original LED circuit unchanged.

When soldering to the wires, make sure that the wires are at least 12 inches long and have pin headers that can slip over the Pi's GPIOs.

Step 5: Getting Your Hardware Ready: Chassis

Source Files:

We used a simple 3D printed chassis to house the Pi, the camera, and the LED. Using our chassis is optional, though recommended to prevent young children from touching exposed electronic circuitry. Every crib is different, so our chassis does not include a mounting bracket; choose a mounting method that suits your crib.

If you have access to a MakerBot Replicator (5th Generation), you can simply download the .makerbot files for the case and cover onto your MakerBot Replicator and print. It takes about 6 hours to print the case and 3 hours to print the cover. If you are using a different type of 3D printer, please keep reading.

A minimum build volume of 9.9" (L) x 7.8" (W) x 5.9" (H) is required to print CribSense. If you do not have access to a 3D printer with this build volume, you can use an online 3D printing service (such as Shapeways or Sculpteo) to print CribSense. The minimum print resolution is 0.015". If you are using a fused filament fabrication type 3D printer, this means that your nozzle diameter needs to be 0.015" or smaller. Printers with lower print resolutions (larger nozzle diameters) may work, but the Raspberry Pi might not fit into the chassis. We recommend PLA (polylactic acid) as the preferred printing material. Other plastics may work, but the Raspberry Pi may not fit in the case if the thermal expansion coefficient of the chosen plastic is larger than that of PLA. If your 3D printer has a heated build plate, turn off the heater before proceeding.

Orienting the model on your printer's build plate is critical for a successful print. These models were carefully designed so they do not need to be printed with support material, thus saving plastic and improving print quality. Before proceeding, download the 3D files for the case and cover. When printing these models, the neck of CribSense must lay flat on the build plate. This ensures that all overhang angles on the models do not exceed 45 degrees, thus eliminating the requirement for support material. For instructions on orienting 3D models in the build volume of your printer, please refer to the instruction manual that comes with your 3D printer. Examples for the build orientation of the case and cover are shown above.

In addition to putting the neck of CribSense flat against the build plate, you may notice that the models are rotated around the vertical axis. This may be necessary to fit the model inside the build volume of your 3D printer. This rotation is optional if the length of your build volume is long enough to accommodate CribSense.

Step 6: Getting Your Hardware Ready: Assembly

Once you have all the hardware ready, you can begin assembly. Any glue can be used in this process, but we recommend hot glue for two main reasons. Hot glue dries quickly, so you do not need to wait long for it to set. In addition, hot glue is removable if you make a mistake. To remove dried hot glue, soak it in rubbing (isopropyl) alcohol. We recommend 90% concentration or higher, but 70% will still work. Soaking the dried hot glue in isopropyl alcohol weakens the bond between the glue and the underlying surface, allowing you to peel the glue off cleanly. When soaking the glue in isopropyl alcohol, the Raspberry Pi should be powered off and unplugged. Be sure to let everything dry before reapplying hot glue and booting the Raspberry Pi.

All of the pictures for these steps are in order and follow along with the text steps.

  1. Insert the Raspberry Pi into the chassis. You will need to flex it a bit to get the audio port in, but once it is in, the audio jack will keep it in place. Once it is in place, be sure that all of the ports can still be accessed (e.g. you can plug in the power cable).
  2. Next, use hot glue to tack the Pi into place and attach the camera to the Pi. There are screw holes as well if you prefer to use those.
  3. Now, glue the LED and camera to the front cover (pictured). Start by hot gluing the NoIR camera to the camera hole. Be sure that the camera is snug and lined up with the chassis. Do not use too much glue; otherwise, you will not be able to fit the camera into the main case. Be sure to power on the Pi and take a look at the camera (`raspistill -v`, for example) to make sure that it is angled well and has a good field of view. If it is not, remove the hot glue and reposition it.
  4. Next, glue the IR LED to the hole on the neck of the cover. The neck is at a 45 degree angle to side light the crib, which results in more shadows in low-light situations. This adds more contrast to the image, making it easier to detect motion.
  5. Attach the IR LED wires to the Raspberry Pi's header pins as shown in the schematic picture.
  6. Pack the cables into the chassis in a way that does not crease or strain them. We ended up folding the cable accordion style because our camera flex cable was too long.
  7. With everything tucked in, hot glue around the edges where the two pieces meet, sealing them in place.

Step 7: Calibration

Details about configuration parameters can be found in the CribSense repository documentation. Also view the video to see an example of how you might calibrate CribSense after you have everything set up.

Here is a sample of the configuration file:

[io]                  ; I/O configuration
; input = path_to_file  ; Input file to use
input_fps = 15          ; fps of input (40 max, 15 recommended if using camera)
full_fps = 4.5          ; fps at which full frames can be processed
crop_fps = 15           ; fps at which cropped frames can be processed
camera = 0              ; Camera to use
width = 640             ; Width of the input video
height = 480            ; Height of the input video
time_to_alarm = 10      ; How many seconds to wait with no motion before alarm.

[cropping]            ; Adaptive Cropping Settings
crop = true                 ; Whether or not to crop
frames_to_settle = 10       ; # frames to wait after reset before processing
roi_update_interval = 800   ; # frames between recalculating ROI
roi_window = 50             ; # frames to monitor before selecting ROI


[motion]              ; Motion Detection Settings
erode_dim = 4           ; dimension of the erode kernel
dilate_dim = 60         ; dimension of the dilate kernel
diff_threshold = 8      ; abs difference needed before recognizing change
duration = 1            ; # frames to maintain motion before flagging true
pixel_threshold = 5     ; # pixels that must be different to flag as motion
show_diff = false       ; display the diff between 3 frames

[magnification]       ; Video Magnification Settings
amplify = 25                ; The % amplification desired
low-cutoff = 0.5            ; The low frequency of the bandpass.
high-cutoff = 1.0           ; The high frequency of the bandpass.
threshold = 50              ; The phase threshold as % of pi.
show_magnification = false  ; Show the output frames of each magnification

[debug]
print_times = false ; Print analysis times

Calibration of the algorithm is an iterative effort, with no exact solution. We encourage you to experiment with various values, combining them with the debugging features, to find the combination of parameters most suitable to your environment. Before you start calibration, make sure show_diff and show_magnification are set to true.

As a guideline, increasing amplification and the phase_threshold values increases the amount of magnification applied to the input video. You should change these values until you clearly see the movement you want to track in the video frame. If you see artifacts, reducing the phase_threshold while keeping the same amplification might help.

The motion detection parameters help compensate for noise. When detecting regions of motion, erode_dim and dilate_dim are used to size the dimensions of the OpenCV kernels used to erode and dilate motion so that noise is first eroded away, then the remaining motion signal is significantly dilated to make the regions of motion obvious. These parameters may also need to be tuned if your crib is in a very high-contrast setting. In general, you will need a higher erode_dim for high contrast settings, and a lower erode_dim for low contrast.

If you run CribSense with show_diff = true and you notice that too much of the accumulator output is white, or some completely unrelated part of the video is detected as motion (e.g. a flickering lamp), increase the erode_dim until only the part of the video corresponding to your baby is the largest section of white. The first figure shows an example where the erode dimension is too low for the amount of motion in the frame, while the next one shows a well calibrated frame.

Once this has been calibrated, make sure that the pixel_threshold is set to a value such that "Pixel Movement" only reports the peak values of pixel movement, and not all of them (which means you need to cut out the noise). Ideally, you will see output like this in your terminal, where there is a clear periodic pattern corresponding to the motion:

[info] Pixel Movement: 0	 [info] Motion Estimate: 1.219812 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 1.219812 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 1.219812 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 1.219812 Hz
[info] Pixel Movement: 44	 [info] Motion Estimate: 1.219812 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 1.219812 Hz
[info] Pixel Movement: 161	 [info] Motion Estimate: 1.219812 Hz
[info] Pixel Movement: 121	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 86	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 97	 [info] Motion Estimate: 0.841416 Hz
[info] Pixel Movement: 74	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 60	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 48	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 38	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 29	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 28	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 22	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz
[info] Pixel Movement: 0	 [info] Motion Estimate: 0.839298 Hz

If your output looks more like this:

[info] Pixel Movement: 921	 [info] Motion Estimate: 1.352046 Hz
[info] Pixel Movement: 736	 [info] Motion Estimate: 1.352046 Hz
[info] Pixel Movement: 666	 [info] Motion Estimate: 1.352046 Hz
[info] Pixel Movement: 663	 [info] Motion Estimate: 1.352046 Hz
[info] Pixel Movement: 1196	 [info] Motion Estimate: 1.352046 Hz
[info] Pixel Movement: 1235	 [info] Motion Estimate: 1.352046 Hz
[info] Pixel Movement: 1187	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 1115	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 959	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 744	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 611	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 468	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 371	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 307	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 270	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 234	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 197	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 179	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 164	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 239	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 733	 [info] Motion Estimate: 1.456389 Hz
[info] Pixel Movement: 686	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 667	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 607	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 544	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 499	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 434	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 396	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 375	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 389	 [info] Motion Estimate: 1.229389 Hz
[info] Pixel Movement: 305	 [info] Motion Estimate: 1.312346 Hz
[info] Pixel Movement: 269	 [info] Motion Estimate: 1.312346 Hz
[info] Pixel Movement: 1382	 [info] Motion Estimate: 1.312346 Hz
[info] Pixel Movement: 1086	 [info] Motion Estimate: 1.312346 Hz
[info] Pixel Movement: 1049	 [info] Motion Estimate: 1.312346 Hz
[info] Pixel Movement: 811	 [info] Motion Estimate: 1.312346 Hz
[info] Pixel Movement: 601	 [info] Motion Estimate: 1.312346 Hz
[info] Pixel Movement: 456	 [info] Motion Estimate: 1.312346 Hz

Adjust pixel_threshold and diff_threshold until only the peaks are seen and pixel movement is 0 otherwise.

Step 8: Demonstration

Here is a little demo of how CribSense works. You will have to imagine that this is attached to the side of a crib.

When you position CribSense over your crib, you will need to optimize the distance between the infant and camera. Ideally, your infant's chest will fill less than 1/3 of the frame. The child should not be too far away, or else the low-resolution video will struggle to find enough details to magnify. If the camera is too close, the camera might not be able to see your child if they roll or move out of the frame. Similarly, if the child is under a "tented" blanket, where there is limited contact between the blanket and the child's chest, it may be difficult to detect motion. Tuck them in well!

You will also want to consider the lighting situation around your crib. If your crib is right next to a window, you might get moving shadows or changing light values as the sun is blocked by clouds, or movement happens outside the window. Somewhere with consistent lighting is best.

With some more work, we think that someone could improve our software so that calibration is a much smoother process. In the future, additional features like push notifications could also be added.

Step 9: Troubleshooting

You may encounter a few common issues while setting up CribSense, such as trouble building or running the program, or no audio from the alarm. Remember, CribSense is not a perfectly reliable baby monitor. We welcome contributions on our GitHub repository as you make improvements!

Here are some troubleshooting tips we have gathered while making CribSense.

No alarm is playing

  • Are your speakers working?
  • Can you play other sounds from the Pi outside of the CribSense alarm?
  • Is your Pi trying to play audio through HDMI rather than the audio port? Check the Raspberry Pi Audio Configuration page to make sure that you have selected the correct output.
  • Is CribSense software detecting motion? If CribSense is running in the background, you can check with journalctl -f in a terminal.
  • If CribSense is sensing a lot of motion, you may need to calibrate CribSense.

The IR LED is not working

  • Can you see a faint red color when you look at the IR LED? A faint red ring should be visible when the LED is on.
  • Check the polarity of the connections. If +5V and GND are reversed, it will not work.
  • Connect the LED to a power supply with a 5V/0.5A voltage/current limit. Normally, it should consume 0.2A at 5V. If it does not, your LED may be malfunctioning.

CribSense is detecting motion even though there is not an infant

  • Have you properly calibrated CribSense?
  • Remember, CribSense is just looking for changes in pixel values
    • Are there any shadows moving within the frame?
    • Is there flickering or changing lighting?
    • Is CribSense mounted to a stable surface (i.e. something that will not shake if people are walking by it)?
    • Are there any other sources of movement in the frame (mirrors catching reflections, etc.)?

CribSense is NOT detecting motion even though there is motion

  • Have you properly calibrated CribSense?
  • Is there anything in the way of the camera?
  • Are you able to connect to the camera from Raspberry Pi at all? Check by running raspistill -v in a terminal to open the camera on the Pi for a few seconds.
  • If you look at sudo systemctl status cribsense, is CribSense actually running?
  • Is your infant under a blanket that is "tented" up so that it is not making contact with the child? If there are significant air gaps between the blanket and the child, the blanket may mask the motion.
  • Can you see the motion if you amplify the video more?
  • Can you see the motion if you tune the low and high frequency cutoffs?
  • If this is happening in low-light only, did you make sure your calibration works in low-light?

CribSense does not build

  • Did you install all of the dependencies?

I cannot run cribsense from the command line

  • Did you accidentally mistype anything when you ran ./autogen.sh --prefix=/usr --sysconfdir=/etc --disable-debug during your software build?
  • Is cribsense present in /usr/bin?
  • What path is provided if you run "which cribsense"?