Understanding convergence is important when running CFD simulations, and it's often easier to assess this visually than to scroll through the log file manually. I created a script that takes the log from simpleFoam and dumps the residuals into a CSV file, or creates an HTML report.
The former can be read into Excel and graphed, or the HTML report can be viewed directly. The report uses Flot and provides a scrollable, zoomable interface to the data.
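The parsing side is straightforward. Here is a rough sketch of the CSV half in Python, assuming the standard OpenFOAM solver output lines (`Solving for Ux, Initial residual = ..., Final residual = ..., No Iterations ...`); the function names are illustrative, not the script's actual API:

```python
import csv
import re

# Matches the standard OpenFOAM solver line, e.g.
# "smoothSolver:  Solving for Ux, Initial residual = 0.1, Final residual = 0.001, No Iterations 4"
RESIDUAL_RE = re.compile(r"Solving for (\w+), Initial residual = ([\d.eE+-]+)")

def parse_residuals(log_lines):
    """Collect the initial residual for each field, one row per time step."""
    rows = []
    current = {}
    for line in log_lines:
        if line.startswith("Time = "):
            if current:
                rows.append(current)
            current = {"Time": line.split("=", 1)[1].strip()}
        else:
            m = RESIDUAL_RE.search(line)
            if m and "Time" in current:
                # Keep only the first residual per field per step,
                # ignoring inner corrector loops.
                current.setdefault(m.group(1), float(m.group(2)))
    if current:
        rows.append(current)
    return rows

def write_csv(rows, out_file):
    """Write one column per field; missing values are left blank."""
    fields = ["Time"] + sorted({k for r in rows for k in r} - {"Time"})
    writer = csv.DictWriter(out_file, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
```

The HTML report follows the same pattern: the same rows are serialised as a JSON array for Flot to plot instead of being written as CSV.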
I have a pretty accurate drawing of the Seven, and from those drawings I've created models in OpenFOAM. I want to do the same for the Fury; however, I don't have drawings of the bodywork. Drawing it freehand in SolidWorks would take me forever, so I decided to try scanning the car using an Xbox Kinect and the OpenKinect libraries.
The first thing you need to do is get a point cloud from the Xbox Kinect, which is not quite as straightforward as it seems. First, sync_get_depth() returns a disparity image, not Cartesian points, so some conversion is required. The OpenKinect wiki has most of the background information needed, and the calibkinect library can do most of the calibration work. The result can then be written in whatever format suits you best; I chose Wavefront's OBJ format as it stores point normals and can be read by both MeshLab and ParaView.
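For illustration, here is a minimal sketch of the disparity-to-Cartesian conversion with NumPy. The calibration constants are the approximate ones published on the OpenKinect wiki; a particular Kinect will differ slightly, and the function names here are my own, not calibkinect's:

```python
import numpy as np

# Approximate calibration constants from the OpenKinect wiki;
# individual Kinects vary slightly.
FX, FY = 594.21, 591.04   # focal lengths in pixels
CX, CY = 339.5, 242.7     # optical centre

def disparity_to_metres(raw):
    """Convert 11-bit raw disparity values to depth in metres."""
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def depth_to_points(raw):
    """Project a 640x480 disparity image into (x, y, z) points."""
    h, w = raw.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = disparity_to_metres(raw.astype(np.float64))
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[raw.reshape(-1) < 2047]   # 2047 marks "no reading"

def write_obj(points, path):
    """Dump the cloud as Wavefront OBJ vertices (MeshLab/ParaView readable)."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")
```

Point normals, if you compute them, go into the same file as `vn` lines alongside the vertices.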
Getting a single disparity image is quite easy; stitching many together is more challenging. Fundamentally, in order to scan an object you need to know the following:
Where the camera is relative to a fixed point in space.
Which direction the camera is pointing.
Assuming you know these two things, a fairly simple matrix transformation moves the points into the appropriate location. Initially I tried to find a clever way to triangulate the position and direction of the Kinect: it would be great to be able to use it like a wand, but that requires a significant level of accuracy, and the only sensor on board is a tilt sensor. GPS won't cut it. I had some ideas about using a bunch of cameras or ultrasonic sensors, but in the end I shelved that idea for now.
Instead I decided the easiest way to get a usable scan was to do things the old-fashioned way. I only need to scan half the car, as it's symmetrical. I created a box around the car, which allowed me to position the camera at known locations. I used a tripod to hold the Kinect, and took images at one-meter intervals from about one and a half meters away from the car.
I then cleaned each mesh in MeshLab so that only the car remained, with no background. Next I loaded each one into ParaView and applied transformations to each file until they lined up, based on the measurements I'd taken earlier. This worked pretty well, but the first image in each direction needed some manual adjustment.
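Each of those per-file transformations is just the rigid transform described above: rotate the points by the camera's heading, then translate by its position. A minimal sketch, assuming the camera only rotates about the vertical axis (true for a tripod-mounted scan) and using an illustrative function name:

```python
import numpy as np

def camera_to_world(points, yaw_deg, position):
    """Rigid transform from the camera's frame to the fixed reference frame.

    points   -- (N, 3) array of points in the camera's frame
    yaw_deg  -- heading of the camera (rotation about the vertical y axis)
    position -- camera location relative to the fixed reference point
    """
    t = np.radians(yaw_deg)
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    # Rotate each point, then translate by the camera position.
    return points @ R.T + np.asarray(position, dtype=float)
```

Applying this to every cloud with the measured tripod positions puts them all in one coordinate system, which is exactly what the ParaView transforms do interactively.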
The end result is good enough for a first attempt. I'll load the file into SolidWorks and start drawing.