Stephen McDowell  ::  Work

What I am currently involved in, and things I have worked on in the past.

Current Work

Deformable Scanning

My current research involves using a variety of depth cameras, like the Microsoft Kinect (both versions) and the Intel RealSense F200 and R200, to infer the geometry of deformable objects. The focus is to reconstruct the articulated motion of objects as a user interacts with them in real time (e.g., a desk lamp, a pair of scissors).

The framework supports a few of the more common camera models, but is easily extensible through a pure virtual DepthCamera class. The code can be built as a library or an application, and supports CUDA, Compute Shaders, or a CPU version depending on the hardware available.
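
The real interface is more involved, but a minimal sketch of what such a pure virtual DepthCamera base class looks like in spirit (the names and signatures here are my own illustration, not the actual API):

    #include <cstdint>
    #include <vector>

    // Illustrative only: each supported device (Kinect v1 / v2,
    // RealSense F200 / R200, ...) implements this interface, so the
    // rest of the framework never needs to know which camera is attached.
    class DepthCamera {
    public:
        virtual ~DepthCamera() = default;

        // Open and close the physical device.
        virtual bool start() = 0;
        virtual void stop() = 0;

        // Blocking grab of the next depth frame (millimeters, row-major).
        virtual bool nextFrame(std::vector<std::uint16_t> &depth) = 0;

        // Intrinsics needed to back-project depth pixels into 3D.
        virtual int width() const = 0;
        virtual int height() const = 0;
        virtual float fx() const = 0;  // focal length, x
        virtual float fy() const = 0;  // focal length, y
    };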

Virtual Reality

I took an exciting new course last year where developers were paired with architects / designers to explore the limits of virtual reality. We worked with various game engines such as Unity and Unreal, but I found that these engines severely limited our ability to change how rendering is performed (e.g., performing our own deferred rendering). If you want to use the defaults, these engines are excellent. However, if you want to change core functionality (my group was doing pre-computed radiosity), these engines are too entrenched to make that manageable for a semester project.

I am currently developing a miniature framework that handles all of the OpenGL and OpenVR initialization. The goal is to make it as easy as possible to begin developing a VR application: the user only needs to implement their own OpenGL setup and scene rendering method (see the sketch after this list), and the framework handles the rest, including:

  • The ability to work offline (without a headset).
  • Native camera models for both 2D (fly camera) and 3D (HMD follow camera).
  • Cross platform build including linking to all of the various libraries provided by OpenVR.
  • Two-pass (left-right) or one-pass rendering depending on the mode.
  • Complete control to change all or none of the above (e.g. implementing different controls) depending on user preferences.
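
The intended usage looks roughly like the following, a sketch under hypothetical names rather than the final API: the user derives from one base class, fills in two hooks, and the framework drives everything else.

    // Illustrative sketch: the framework owns the OpenGL / OpenVR setup,
    // the per-eye render passes, and frame submission; the user only
    // implements the two pure virtual hooks below.
    class VRApplication {
    public:
        virtual ~VRApplication() = default;

        // Provided by the framework: initialize OpenVR (or the offline
        // fly camera), then loop: poll devices, call renderScene once
        // or twice per frame depending on the mode, and submit to the
        // compositor.
        void run();

    protected:
        virtual void setupGL() = 0;  // user: create buffers, shaders, ...

        // User draw calls; view / proj are 4x4 column-major matrices
        // supplied by the active camera model (fly camera or HMD).
        virtual void renderScene(const float *view, const float *proj) = 0;
    };

    class MyApp : public VRApplication {
    protected:
        void setupGL() override { /* VAOs, shaders, textures */ }
        void renderScene(const float *view, const float *proj) override {
            /* ordinary OpenGL drawing */
        }
    };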

Instructor: Unix Tools and Scripting

Though I am capable of functioning in Windows, I much prefer to hack my way around dependencies in Linux. I was a student in this course a few years ago, and was thrilled to hold the torch last round. I spent a lot of time redesigning how the course runs, making competency with git one of the central themes. Technically git is not Unix-specific, but overall the students seemed to appreciate the workflow.

This was the first course I led on my own; the course website for last year is here. I am currently reworking the materials, and am excited to make them even stronger this year.

Documentation Automation

Building on top of an excellent Sphinx extension called breathe, I resurrected the Doxygen-style class and file hierarchies so that they are automatically generated by my library: exhale. Wielding breathe and Doxygen, the library automatically generates the full library API in reStructuredText documents. This enables post-commit hooks with Read The Docs to generate the library API on the fly, without needing to explicitly enumerate all of the classes you want documented.
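
For a sense of what the pipeline consumes, any header documented with ordinary Doxygen comments works; the (entirely hypothetical) example below would be parsed by Doxygen, exposed to Sphinx through breathe, and placed into the generated class and file hierarchies by exhale:

    #pragma once
    /// \file
    /// \brief A hypothetical header; Doxygen parses the comments, breathe
    ///        exposes them to Sphinx, and exhale builds the hierarchies.

    namespace arraymath {

    /// \brief Accumulates a running sum of values.
    class Accumulator {
    public:
        /// Add \p value to the running total.
        void add(double value) { total_ += value; }

        /// \return The sum of everything added so far.
        double sum() const { return total_; }

    private:
        double total_ = 0.0;  ///< Running total.
    };

    }  // namespace arraymath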

It’s about 90% complete at this point: it generates everything it should, but there are some issues with templates and differences between what is generated locally and what is generated on Read The Docs. The differences are minor; sometimes functions or enums will not link back to the “File” page where they were originally defined. At this point the causes of both problems are known, but the solutions are still in progress.

Cornell Scientific Software Club

I am currently the Vice President of the Cornell Scientific Software Club, a place for students and faculty across a wide range of disciplines to gather and discuss the latest and greatest in scientific computing at large. I maintain the website, help present topics, recruit interested parties to present, etc.

We are also upgrading our club’s cluster to RHEL 7 right now; I will be administering the new configurations, such as additional compilers, Python 3, different MPI implementations, and whatnot. If you have not heard of it, you should take a look at the spack package manager. It’s not perfect, but it generally does an incredible job of automating some of the more tedious aspects of configuring a cluster with the best tools.

Previous Work

Realistic Image Synthesis

A selection of renders from some of the more entertaining projects:

  • 01_average_visibility_sponza.jpg: Average visibility integration (average visibility of surface points seen by the camera) of the Sponza model.
  • 02_veach_ems.jpg: The classic Veach test, integrated with emitter sampling only.
  • 03_veach_mats.jpg: The classic Veach test, integrated with material sampling only.
  • 04_veach_mis.jpg: The classic Veach test, integrated with multiple importance sampling.
  • 05_table_path_mats.jpg: Microfacet BRDF (Beckmann distribution and Smith shadow-masking), path traced with material sampling.
  • 06_table_path_mis.jpg: Microfacet BRDF (Beckmann distribution and Smith shadow-masking), path traced with multiple importance sampling between emitters and materials.
  • 07_table_ppm.jpg: Microfacet BRDF (Beckmann distribution and Smith shadow-masking), progressive photon mapping.
  • 08_cbox_foggy_mis.jpg: The Cornell Box with a homogeneous volume using an isotropic phase function, integrated using multiple importance sampling.
  • 09_scatter_cube_mismatched_mats.jpg: A homogeneous volume using the Henyey-Greenstein phase function (g = 0.4), integrated using material sampling.
  • 10_steam_mats.jpg: A heterogeneous grid volume using an isotropic phase function, integrated with material sampling.
  • 11_velvet_mats.jpg: A heterogeneous grid volume using an isotropic phase function, integrated with multiple importance sampling.

I had a great deal of fun working on the projects for this class (CS 6630), from all of the different integrators, shading models, etc. we encountered, to the theory behind them. The course released assignments in Java, but my partner and I chose to work in the original C++ framework maintained by Wenzel Jakob. A little more effort was required since some of the assignments were different, but the exposure to Nori was an excellent lesson in cross-platform C++ software engineering. The basic philosophy: build up a ray tracer step by step over the semester!

This course provided a thorough introduction to the mathematical formulation of the rendering equation, including how different representations (e.g., for volume integration) can be derived. I was simultaneously enrolled in a different course with access to a cluster of Xeon Phi boards, and it was an eye-opening experience in how to properly farm out a large rendering task to different nodes. There were complications given the limitations of the Xeon Phi compiler (no C++11 / aligned allocator support), but it left me ready for more cluster work in the future.
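
At the heart of all of those integrators is the rendering equation, and the multiple importance sampling renders above weight the emitter and material sampling strategies with the standard balance heuristic (n_s samples drawn from strategy s, with pdfs p_j):

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\mathcal{H}^2} f_r(x, \omega_i, \omega_o)\,
          L_i(x, \omega_i)\, \cos\theta_i \, \mathrm{d}\omega_i

    w_s(x) = \frac{n_s\, p_s(x)}{\sum_j n_j\, p_j(x)}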

The sheer precision and elegance of offline rendering are unrivaled in computer science at large, in my not-so-biased opinion.

Realtime Rendering

For my realtime rendering course (CS 5625), we were exposed to a variety of interesting topics such as deferred shading pipelines, linear blend skinning, bump mapping using tangent spaces (like this article), anisotropic mipmapping, and a lot more! We also covered various signal processing techniques to better understand aliasing and antialiasing, as well as techniques to compress data, such as storing unit vectors in spherical coordinates (sketched below).
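
As one concrete example of the compression idea, a unit vector can be stored as two angles instead of three floats. A minimal sketch, with my own helper names and assuming unit-length input:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Encode a unit vector as spherical coordinates: two floats instead
    // of three, since the length is implicitly one.
    inline void toSpherical(const Vec3 &n, float &theta, float &phi) {
        theta = std::acos(n.z);        // polar angle in [0, pi]
        phi   = std::atan2(n.y, n.x);  // azimuth in (-pi, pi]
    }

    // Recover the unit vector from the two stored angles.
    inline Vec3 fromSpherical(float theta, float phi) {
        const float s = std::sin(theta);
        return { s * std::cos(phi), s * std::sin(phi), std::cos(theta) };
    }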

For the final project, my partner and I brought our serial implementation of Smoothed Particle Hydrodynamics (SPH) into the realtime domain. We created a minimalist deferred rendering pipeline in C++, translated our SPH solver from Java to C++, and then translated the serial C++ implementation into CUDA. We ultimately had significant numerical stability issues with our CUDA SPH solver (the cause was never identified…), so we stripped down the NVIDIA Particles CUDA sample in order to focus on rendering.

We followed this article to generate normals and blur them, getting up to just before the introduction of specular reflections. This project was entertaining to engineer, and it was my first experience directly linking CUDA and OpenGL so that both use the same data, which greatly increases what the program can do. For a little interactivity I added the ability to rotate the computational cube by simply rotating the gravity direction, which was quite entertaining to play with. Throw in a cube-map and it’s a party.
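
The core of that CUDA / OpenGL linkage is graphics interop: register a vertex buffer with CUDA once, then map it each frame so the solver writes particle positions straight into the memory OpenGL draws from. A sketch of the host side, where advanceSPH is a hypothetical stand-in for our solver's kernel launch:

    #include <cuda_gl_interop.h>
    #include <cuda_runtime.h>

    // Hypothetical stand-in for the SPH kernel launch.
    void advanceSPH(float4 *positions, int numParticles);

    // Register an existing OpenGL vertex buffer with CUDA (done once).
    cudaGraphicsResource *registerParticleVBO(GLuint vbo) {
        cudaGraphicsResource *resource = nullptr;
        cudaGraphicsGLRegisterBuffer(&resource, vbo, cudaGraphicsMapFlagsNone);
        return resource;
    }

    // Each frame: map, let CUDA write directly into the buffer, unmap,
    // and then render with ordinary OpenGL calls on the same VBO.
    void stepParticles(cudaGraphicsResource *resource, int numParticles) {
        float4 *positions = nullptr;
        size_t numBytes = 0;
        cudaGraphicsMapResources(1, &resource, 0);
        cudaGraphicsResourceGetMappedPointer(
            reinterpret_cast<void **>(&positions), &numBytes, resource);
        advanceSPH(positions, numParticles);
        cudaGraphicsUnmapResources(1, &resource, 0);
        // glDrawArrays(...) using the VBO happens after this.
    }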