Support real-time image/sensor draping onto terrain
This will require custom shaders and materials to be added to Cesium. Currently I am unable to build any Cesium version newer than 1.27 (the current release is 1.30).
I was thinking along the lines of straight-down image capture: use OpenCV to stitch four photos, record the GPS point at the centre of each image, then use GDAL to georeference the result.
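For the georeferencing step, here is a rough sketch of the math (not tied to any existing code in this project): with a NADIR shot over roughly flat ground you can derive a north-up GDAL-style geotransform from the image-centre GPS fix, the altitude above ground, and the camera's horizontal FOV. The function name and the flat-earth metres-per-degree constants are illustrative assumptions:

```python
import math

def nadir_geotransform(center_lat, center_lon, alt_m, hfov_deg, width_px, height_px):
    """Approximate a north-up GDAL-style geotransform for a NADIR image.

    Flat-earth approximation around the image centre; returns the six
    values in GDAL's (origin_x, px_w, 0, origin_y, 0, -px_h) convention,
    in degrees. Assumes square pixels (same GSD in both axes).
    """
    # Ground footprint width from altitude and horizontal FOV
    ground_w = 2.0 * alt_m * math.tan(math.radians(hfov_deg) / 2.0)
    gsd = ground_w / width_px                       # metres per pixel
    # Approximate metres-per-degree near the centre latitude
    m_per_deg_lat = 111320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(center_lat))
    px_w_deg = gsd / m_per_deg_lon
    px_h_deg = gsd / m_per_deg_lat
    # Upper-left corner is half an image west and north of the centre
    origin_lon = center_lon - px_w_deg * width_px / 2.0
    origin_lat = center_lat + px_h_deg * height_px / 2.0
    return (origin_lon, px_w_deg, 0.0, origin_lat, 0.0, -px_h_deg)
```

The six values could then be handed to GDAL (e.g. `SetGeoTransform` on a raster dataset) to place the stitched image on the map; for sloped terrain or off-NADIR shots you would need the full ray/terrain intersection instead.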
@brizey02 I like your idea and proposed workflow! Is this something you have existing experience in? In the PR linked above I was draping my webcam onto terrain with a lens distortion model. What's nice about this is that it works well for high slant angles and hilly terrain. However, I'm less worried about live draping of videos now, with more of a focus on NADIR images (which are more useful for search and rescue and mapping, IMO). If you have any more thoughts on the topic feel free to throw them in this issue. Cheers, Sam
I have tinkered with OpenCV; version 3.0 and up has some really good stitching options, but I haven't worked with them yet. From another angle on NADIR: if there is a way to show the camera view (something like Tower does), we could maybe place it on the map using the image bounds and rotation.
Hi, drawing a real-time outline of the sensor footprint is something that I have also experimented with, and it was somewhat simpler than full motion video. Basically I was drawing the footprint all the time and then draping an image onto the terrain when a photo was triggered. Once a series of images was captured, it could be extended to stitch the images and drape them as a whole / tiled dataset. Would this be more useful for you than full motion video?
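For reference, over flat terrain the footprint of a straight-down camera is simple to compute from altitude and the two FOV angles. A hypothetical sketch (this is not the code from my experiments, which used terrain picks rather than a flat-ground assumption; the function name and ENU corner ordering are made up for illustration):

```python
import math

def nadir_footprint_offsets(alt_m, hfov_deg, vfov_deg):
    """Corner offsets (east, north) in metres of the ground footprint of a
    straight-down (NADIR) camera at altitude alt_m over flat terrain.

    Returns the four corners counter-clockwise starting from the
    south-west corner.
    """
    half_e = alt_m * math.tan(math.radians(hfov_deg) / 2.0)
    half_n = alt_m * math.tan(math.radians(vfov_deg) / 2.0)
    return [(-half_e, -half_n), (half_e, -half_n),
            (half_e, half_n), (-half_e, half_n)]
```

These offsets would then be added to the vehicle's position (converted to a local east-north frame) to get the polygon to draw; on hilly terrain each corner really needs a ray/terrain intersection instead.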
I would say static images; video is nice, but it's the raw imagery that I think is more important. By the way, here is my UAS/911 ops portal: DEM, contours, 911 house numbers!

Nice! Looks great! What are you using to generate and host / serve the tile set? I have always planned to have more overlays with weather, custom imagery (from previous photogrammetry missions), etc., but have yet to give it a go.
QGIS with QTiles; NEXRAD comes from IEM's server (https://mesonet.agron.iastate.edu/GIS/). I had GeoServer going with WMS but it was kind of slow. (I had two instances of Cesium running: I would like to set one to a GoPro FOV so I can tell what house or area the live stream is looking at directly, and use the other as a moving map or synthetic display.) The latest idea is to pre-render tiles as a TMS dataset on the server and make map clicks query the PostGIS DB for all the feature info. GPS from ground units and ADS-B will probably be GeoJSON from the PostGIS DB, filtered by range. I was thinking of making the house numbers labels so they rotate with the map, but I haven't got to that yet. I really appreciate all the work you put into the module. I hope to get something flying soon, along with some telemetry radios, and I'm going to add an overlay with live video when I do!
On the NADIR camera integration: would it be possible to set the Cesium.PerspectiveFrustum options to the GoPro FOV, then capture the geospatial bounds on the camera trigger? I was also thinking about some gimbal tracking/following code based on the RC channel input, for interacting with the synthetic view.
Hi, if I understand correctly you're looking for a 'camera view' mode that shows what the camera would see based on its FOV and current attitude relative to the vehicle. E.g. say I have a fixed wing with a camera on a gimbal; you would like the scene to represent (as closely as possible) what the camera on the gimbal can see at that point in time. Is this correct?
Yes!
Okay, great. I had something similar to that working prior to putting the project on github so it shouldn't be too hard to get working. Perhaps this weekend I'll have some more time to experiment. Cheers
And I'm back! lol. I'm working on trying to make Cesium transparent so a bottom div can be a video stream with the layers overlaid. Something like Churchill Navigation.
So something like this https://www.youtube.com/watch?v=S_D2Kx_-LpA or this https://www.youtube.com/watch?v=HXGacs29qrE would be a great target
More FPV-related, but adding elements like what is shown in this video https://www.youtube.com/watch?v=IA18RIQS2og would also be awesome.
Yes! When you were working on overlaying video, did you ever work with the Cesium FOV? I'm trying to get a grasp on whether to use the width or the diagonal degrees.
Sorry, I haven't tried controlling the camera FOV or aspect ratio yet... The code I was working on controlled the texture directly rather than the view. One thing to note is that the angles are all in radians rather than degrees. https://cesiumjs.org/Cesium/Build/Documentation/Camera.html
CesiumMath has a bunch of helper functions for converting if you need them. https://cesiumjs.org/Cesium/Build/Documentation/CesiumMath.html?classFilter=math
Do you think you will have the NADIR and bounding-box coordinates code soon?
@brizey02 Hi, sorry for the delay getting back to you. I had a small push to support some of the features that fnoop required. I'll have a look this weekend for you but basically there are three components to what you are attempting to do:
- Define the rotation of the camera with respect to the vehicle and apply the offset (your goal is NADIR)
- Set the FOV of the camera when in 'mount' mode
- Create four rays with unit vectors defining the FOV of the camera. On each update, attempt to get the terrain location and altitude from a scene pick, which is the intersection of the ray and the terrain
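For the third component above, the four corner ray directions in the camera frame can be built directly from the horizontal and vertical FOV. A small sketch of that math (the frame convention, +z along the boresight with +x right and +y up, and the function name are assumptions for illustration; in Cesium these directions would then be rotated into world coordinates and used for the scene/globe pick):

```python
import math

def fov_corner_rays(hfov_deg, vfov_deg):
    """Unit vectors for the four FOV corner rays in a camera frame where
    +z is the boresight (look direction), +x is right, +y is up."""
    tx = math.tan(math.radians(hfov_deg) / 2.0)
    ty = math.tan(math.radians(vfov_deg) / 2.0)
    rays = []
    # Corners ordered bottom-left, bottom-right, top-right, top-left
    for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        v = (sx * tx, sy * ty, 1.0)
        n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
        rays.append((v[0] / n, v[1] / n, v[2] / n))
    return rays
```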
EDIT: For point 2 above, the following seems to work:

```javascript
viewer.camera.frustum.fov = Cesium.Math.toRadians(30); // this works
viewer.camera.frustum.aspectRatio = 1.0; // this works; aspectRatio = width / height
```
The FOV is a little tricky. From the Cesium docs:

> The angle of the field of view (FOV), in radians. This angle will be used as the horizontal FOV if the width is greater than the height, otherwise it will be the vertical FOV.

So we will need to check whether the page is resized and adjust to suit. There is a value called fovy which can be checked to see what the current vertical FOV is.
Got the FOV fixed. The camera footprint would be great, and maybe, when the camera is triggered via MAVLink, the ability to post the rays' min/max lat/long to a PHP script or something.
Hi, can you let me know what the vertical and horizontal FOV of your camera is? This way I can set some useful defaults for the application. Also, what is the orientation of your camera? Gimbal or fixed?
I have the ray intersection getting close to finished...

There are a few camera MAVLink messages, which one are you looking for in particular? CAMERA_FEEDBACK ?
Cheers, Sam
I would just base off the gopro 4 - https://gopro.com/help/articles/Question_Answer/HERO4-Field-of-View-FOV-Information
I have the solo gimbal setup if that helps.
Okay, thanks. I'll have to see if the gimbal sends info to the ground station or autopilot. If it sends messages I can move the footprint and mount view to match the camera. For the time being I'll start with a fixed mount and add the gimbal feature later.
@brizey02 I have some semi-working code pushed to the sensor_footprint branch. Under the view tab there is now a 'mount' view. The plan is to make this view match the sensor footprint as closely as possible. It's pretty close at the moment, but only from me hand-tweaking values.
It requires a lot more work to make it useful, but it's a start...

@brizey02 I have merged the sensor_footprint branch into master. You can turn on the footprint display from the options menu. Currently the camera values and offsets are hard coded. The working plan is to fix that in time. For now hopefully this gives you something to try out / test.
Works great for me! I wish I had an NVIDIA TX1 to test it on and see how it performed, I want to build a ground station out of one.
I have not considered making a GCS out of a TX1... Would you use a touch screen? I would like to add persistent outlines of the footprint where the images were taken (based on camera trigger events) and the ability to click the footprints on the ground to interact with the image that was taken. How that will work exactly is TBD, but it's something I'll look into next.
Just playing with OpenCV 3.4.0's stitching modules. It seems to stitch NADIR imagery pretty well as a scan. I was thinking of adding a "special color" marker to the centre of each image with less than 30% overlap, then looking for those colors and using the pixel positions as ground control points (GCPs) for GDAL to georeference the images.
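To illustrate the marker search, a toy sketch (plain Python over nested lists of RGB tuples; a real pipeline would use OpenCV or NumPy color masks, and the marker color and tolerance here are made-up parameters):

```python
def find_marker_pixel(pixels, marker_rgb, tol=10):
    """Scan an image (nested list of (r, g, b) tuples) for the first pixel
    within `tol` of the marker color; return its (row, col) or None.

    Stand-in for the 'special color' GCP search described above: the
    returned pixel position would become one GCP's image coordinate.
    """
    for r, row in enumerate(pixels):
        for c, (pr, pg, pb) in enumerate(row):
            if (abs(pr - marker_rgb[0]) <= tol and
                    abs(pg - marker_rgb[1]) <= tol and
                    abs(pb - marker_rgb[2]) <= tol):
                return (r, c)
    return None
```

Each found pixel, paired with the known GPS position of that image centre, would give one GCP for `gdal_translate`-style georeferencing of the stitched result.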
That's a neat idea! Does the CV pipeline support incremental stitching, or is it a batch operation after the mission has been completed and all images are available? I have some code that enables point projection onto DTED data, which might be useful for georeferencing your 'special colour' points.
I started with a bash script. I'm still working on something to fly, but since I'm going to use 802.11 for now, the idea is to capture an image on my PC ground station. The first image is just copied to result.png, then the others are stitched onto result.png, which works well so far. It seems to only take a few seconds to stitch on my machine. I am also using small images as tiles. I probably need to append them to a to-be-processed list, but I've just been tinkering. I want to dynamically add the image to MAVCesium every minute or so, hopefully.
Would you be interested in sharing what you have done so far? You may have noticed that this repo has moved to GoodRobots. I'm working with a couple of people to release a full web GCS and companion computer configuration system under a MIT licence. MAVCesium will be rolled into that work eventually.