Anthropometry with Stereo Cameras

Anthropometry is a science first developed in the 19th century, primarily used at the time to understand the process of evolution. It refers to obtaining systematic measurements of the human body, the core elements of which are height, weight, body mass index (BMI), body circumferences (waist, hip, and limbs), and skinfold thickness. More recently, anthropometric research has shifted towards the use of 3D data. In India, however, physical anthropometric tools such as stadiometers (height), scales (weight), and calipers (skinfold thickness) are still used for most purposes. This project aims to challenge this status quo and bring digital tools for anthropometric measurement to India.


Stereoscopy is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis. Our prototype is able to predict human measurements from a digital source because of its stereo camera. A stereo camera is a type of camera with two or more lenses, with a separate image sensor for each lens. This allows the camera to simulate human depth vision: by using two vantage points, 3D information can be extracted by examining the relative positions of objects in the two images.
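The geometry behind this is compact: for a rectified stereo pair, a point's depth is the focal length times the baseline (the distance between the two lenses), divided by the disparity (how far the point shifts between the two images). Below is a minimal sketch of that relationship; the focal length, baseline, and disparity values are illustrative, not taken from our setup.

```python
# Depth from stereo disparity for a rectified camera pair: Z = f * B / d.
# All numeric values here are illustrative assumptions.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in metres given focal length (px), baseline (m), and disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# A point seen 40 px apart by cameras 0.10 m apart, with an 800 px focal length:
z = depth_from_disparity(focal_px=800.0, baseline_m=0.10, disparity_px=40.0)
print(z)  # 2.0 (metres)
```

Note that depth resolution degrades as disparity shrinks, which is why a wider baseline helps when measuring people standing further from the camera.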


  • The only major hardware requirement is the Coral Dev Board – a single‐board computer containing an Edge TPU coprocessor.
  • In terms of software, PoseNet, a vision model on the TensorFlow platform, is the Coral submodule used for pose detection, and GStreamer is the open-source multimedia library used for streaming the video inputs and outputs.
  • The project is primarily coded in Python.

The first step in creating the plugin was to construct a stereo camera setup. This was done using two phones taped together, such that the cameras were in a horizontal line with each other. While not an ideal setup, this seemed to work well enough for testing. Calibration of the two cameras was then performed using tools from OpenCV. This allowed us to calculate the Q matrix of the stereo camera system, which could then be used to project points into a 3D space.
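The Q matrix produced by stereo rectification maps a pixel position plus its disparity into homogeneous 3D coordinates. The sketch below shows that reprojection step with a hand-built Q matrix in the layout OpenCV's `stereoRectify` uses; the focal length, principal point, and baseline are illustrative assumptions, not our calibrated values.

```python
import numpy as np

# Illustrative camera parameters (assumed, not from our calibration):
f, cx, cy = 800.0, 320.0, 240.0   # focal length and principal point, in pixels
Tx = -0.10                        # translation between cameras (m); negative for a left-right pair

# Q matrix in the layout produced by cv2.stereoRectify.
Q = np.array([
    [1.0, 0.0,  0.0,      -cx],
    [0.0, 1.0,  0.0,      -cy],
    [0.0, 0.0,  0.0,        f],
    [0.0, 0.0, -1.0 / Tx, 0.0],
])

def reproject(x: float, y: float, disparity: float) -> np.ndarray:
    """Map a pixel (x, y) with disparity d to 3D camera coordinates via Q."""
    X, Y, Z, W = Q @ np.array([x, y, disparity, 1.0])
    return np.array([X, Y, Z]) / W

point = reproject(360.0, 240.0, 40.0)  # roughly [0.1, 0.0, 2.0] metres
```

In the real pipeline the Q matrix comes out of the OpenCV calibration step rather than being written by hand, and OpenCV's `reprojectImageTo3D` applies this same transform to a whole disparity image at once.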

The plugin uses PoseNet, a pose estimation model from Google. The key points from both video streams are first identified and then matched with each other. Then, using the Q matrix, the 2D coordinates of the key points are projected into real 3D coordinates. The distance between the desired points is then calculated using the Euclidean distance formula.
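The final measurement step is simple once the key points are in 3D: the length between two matched points is their Euclidean distance. A small sketch with illustrative coordinates (not real PoseNet output):

```python
import numpy as np

# Two reprojected key points in camera coordinates (metres); values are
# illustrative stand-ins for matched PoseNet detections.
eye   = np.array([0.05, 1.60, 2.00])
ankle = np.array([0.05, 0.10, 2.00])

# Euclidean distance between the 3D points approximates the body measurement.
height_estimate = np.linalg.norm(eye - ankle)
print(round(height_estimate, 2))  # 1.5
```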

The use of PoseNet comes with both benefits and limitations. One benefit is that PoseNet has been optimized to run on the Coral Dev Board, which is what we used to run our prototype. However, PoseNet can detect only a limited set of key points, which meant that the closest estimate of height we could achieve was the distance between the eye and the ankle.

Another issue we faced was that outliers in the data caused the height value to fluctuate frequently. This was solved using an exponential moving average, and by clipping the outliers to within one standard deviation of the mean. By clipping the values rather than discarding them, we allow the plugin to keep working even if the person in the frame changes during the video stream: after a few seconds, the height average catches up to the new height.
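The smoothing described above can be sketched as follows. The window size and smoothing factor are illustrative choices, and the class name is hypothetical, not from the project's code.

```python
import statistics

class HeightSmoother:
    """Clip each new sample to within one standard deviation of a running
    window, then fold it into an exponential moving average.
    Window size and alpha are illustrative assumptions."""

    def __init__(self, alpha: float = 0.1, window: int = 30):
        self.alpha = alpha
        self.window = window
        self.samples: list[float] = []
        self.ema = None

    def update(self, value: float) -> float:
        if len(self.samples) >= 2:
            mean = statistics.fmean(self.samples)
            std = statistics.stdev(self.samples)
            # Clip rather than discard: a genuinely new height still pulls the
            # average towards itself over the next few seconds of frames.
            value = max(mean - std, min(mean + std, value))
        self.samples.append(value)
        self.samples = self.samples[-self.window:]
        self.ema = value if self.ema is None else (
            self.alpha * value + (1 - self.alpha) * self.ema)
        return self.ema

smoother = HeightSmoother()
for sample in [1.70, 1.71, 1.69, 3.00]:      # 3.00 is an outlier frame
    smoothed = smoother.update(sample)        # the outlier is clipped to ~1.71
```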

Further improvements to this project could be achieved through a variety of different means. Constructing a better stereo camera setup would help to improve the accuracy of the measurements. Using a different model could allow for better key points to be able to calculate the height more effectively. Finally, adding image processing elements to the pipeline for noise reduction could help to improve the accuracy of PoseNet or another pose estimation model.

State governments, service organizations, health departments, and statistical organizations spend lakhs of rupees each year to collect anthropometric data from people residing in villages in India. This data is collected to build health datasets, analyze medical requirements, conduct surveys, and serve many other purposes. The use of older physical measuring devices is time consuming and laborious. With this stereo camera setup, the process can be as simple as people successively standing in front of the camera while the computer records their anthropometric data (and possibly links it to a centralized health profile). In addition, this data can be shared with government hospitals and doctors to eliminate the need for re-measurement each time a patient visits the hospital. The primary advancement this project needs to become commercially usable is a software user interface for data collection and storage. Identifying a device cheaper than the Coral would also help it reach the most remote villages, where funding is limited.

People behind the project

Eshaan Bhansali

University of California, Berkeley

Anuraag Agarwal

University of Illinois at Urbana-Champaign