Big images
2005-01-15 00:07:00.578229+00 by
Dan Lyke
10 comments
Related to my comments about what I'm working on, my next trick is to do some real-time graphics with a big texture map. No, I mean big: roughly 75k by 91k, or just shy of 7 billion pixels. It's in "MrSID" format right now; I've been running LizardTech's free converter to TIFF since early this morning, and it's 10% done. And then I've got to do something with the resulting image. Next week is going to be interesting.
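(Back-of-envelope on what I'm in for, assuming the converted output is uncompressed 24-bit RGB, which is a guess: the raw numbers land right around 20 gigabytes.)

```cpp
// Rough size of the uncompressed image. The 75k x 91k dimensions are
// from the post; 3 bytes per pixel (24-bit RGB) is an assumption.
#include <cstdio>

int main() {
    const long long w = 75000, h = 91000;
    const long long pixels = w * h;       // ~6.8 billion pixels
    const long long bytes = pixels * 3;   // ~20 GB of raw RGB
    std::printf("%lld pixels, %.1f GB raw\n", pixels, bytes / 1e9);
    return 0;
}
```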
[ related topics:
Work, productivity and environment Graphics Sports Maps and Mapping Archival
]
comments in ascending chronological order:
#Comment Re: made: 2005-01-15 04:58:21.032214+00 by:
ebradway
Someplace around here I thought I had some specs on MrSID. So you're playing with satellite imagery... Are you doing feature extraction or just visualization?
What's sad is I spent much of my time "in class" today teaching my peers (graduate students in GIS) the difference between a TIFF and a JPG. Someone actually renamed a .JPG to .TIF in WinXP (which, of course, just called it .tif.jpg).
I've been waiting until I get far enough into remote sensing classes to actually start doing pixel-wise code, but that's still light-years away. I take my first graduate RS class this summer, and the real pixel-level work will probably have to wait until I get to my PhD studies.
What amazes me about feature extraction is how the science tends to be far enough behind the technology that a lot of up-front effort goes in and the brute-force methods get ignored.
#Comment Re: made: 2005-01-15 05:03:49.771178+00 by:
ebradway
Ok. Just read the other post. You're doing visualization. Are you draping the MrSID over a DEM or are you just doing it all 2D? Are you doing all this from scratch or are you converting to VRML and using a viewer? There's lots of good baseline code out there, mostly from military sources, for creating visualization environments. Let me know if you want some pointers.
#Comment Re: made: 2005-01-15 06:46:36.927944+00 by:
chaosmosis
Virtual war-mongers, you! ;-)
#Comment Re: made: 2005-01-15 17:45:59.470622+00 by:
Dan Lyke
[edit history]
Right now just visualization. Draping textures over DEMs.
From my lofty perch, VRML isn't much of a technology for anything, even though it sees a lot of use in military applications. I tried a bunch of the brute-force methods for visualization, but Ben Discoe has his VTP software over at VTerrain.org, with relatively easy-to-use libraries: set up an OpenGL canvas and call his stuff, and since he uses Open Scene Graph it's super easy to slap in other objects, or even use Cal3D for character animation. He's also got applications which handle data all the way through the art path, so it's easy to poke in at one place and say "give me the data from there".
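To give a flavor of what "draping" means at the bottom of the stack, here's a minimal immediate-mode OpenGL sketch: walk the DEM grid as triangle strips and stretch the texture coordinates across it. The grid parameters and getElevation() accessor are stand-ins for illustration, not VTP's actual API.

```cpp
// Drape one texture over a DEM grid with immediate-mode OpenGL.
// 'rows', 'cols', 'spacing', and getElevation() are hypothetical
// stand-ins for whatever the terrain data actually provides.
#include <GL/gl.h>

extern float getElevation(int i, int j);   // hypothetical DEM accessor

void drawDrapedTerrain(int rows, int cols, float spacing) {
    glEnable(GL_TEXTURE_2D);               // texture assumed already bound
    for (int i = 0; i < rows - 1; ++i) {
        glBegin(GL_TRIANGLE_STRIP);
        for (int j = 0; j < cols; ++j) {
            // Texture coordinates simply span [0,1] over the whole grid.
            glTexCoord2f((float)j / (cols - 1), (float)i / (rows - 1));
            glVertex3f(j * spacing, getElevation(i, j), i * spacing);
            glTexCoord2f((float)j / (cols - 1), (float)(i + 1) / (rows - 1));
            glVertex3f(j * spacing, getElevation(i + 1, j), (i + 1) * spacing);
        }
        glEnd();
    }
}
```

Of course, at 7 billion pixels you'd never feed the whole texture through something like this; this is the one-tile case that the LOD and paging machinery has to multiply out.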
As I get more stuff in the scenes I'm going to have to choose and start tweaking one of his LOD algorithms to handle building and vehicle positioning better (combining polys can make your building flood...), and probably try to do some intelligent things with terrain occlusion and elements in the scene graph.
Feature-extraction wise, I still don't have a feel for what level of error is acceptable, and until I get that everything is going to be user specified...
chaosmosis: Grin. Long-haired hippy types working on military applications was the first thing I had to come to grips with.
#Comment Re: made: 2005-01-15 17:54:58.925205+00 by:
Dan Lyke
And, oh yeah, I've no idea how I'm going to go about dropping 20 gig of texture onto anything; that's going to require some further development on top of Ben's stuff.
#Comment Re: made: 2005-01-15 18:25:12.122554+00 by:
ebradway
Sounds like you're way ahead of me when it comes to tracking down existing libraries and code. Good deal. There's a couple decades' worth of good stuff out there. I didn't want to watch you reinvent too many wheels.
I would say the scope of your textures is really where your challenges lie. And I assume you're going to want to work with imagery that may only be minutes old, so you can't do too much precomputation.
If I remember right, doesn't MrSID store multiple resolutions (scales) of the same imagery? The idea is that you are never going to be able to see 20 gigs of texture information at one time. As you said, the image size is on the order of 75K x 91K pixels. Using a display technology capable of, say, 250dpi, you'd need something like 25 ft x 30 ft of display to see the entire extent of your image. So you store your image in a B-tree of image sections and resolutions. Your resulting B-tree may be about 2X the size of the original image, but you should be able to traverse the tree quickly (you only need image information that is either next to the current node or above or below it in resolution).
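A sketch of what that addressing might look like, assuming fixed-size square tiles and power-of-two spacing between levels; those are assumptions for illustration, not anything I know about MrSID's internals:

```cpp
// Map a full-resolution pixel coordinate onto a tile in a
// multi-resolution pyramid. Level 0 is full resolution; each level
// up halves both dimensions. The 256-pixel tiles and power-of-two
// levels are assumptions, not MrSID's actual layout.
struct TileKey {
    int level;        // 0 = full resolution, higher = coarser
    long long row;    // tile row at that level
    long long col;    // tile column at that level
};

TileKey tileFor(long long x, long long y, int level, int tileSize = 256) {
    long long lx = x >> level;   // shift coordinates down to this level
    long long ly = y >> level;
    TileKey k = { level, ly / tileSize, lx / tileSize };
    return k;
}
```

One aside: with power-of-two levels the extra storage is a geometric series (1/4 + 1/16 + ...) that converges to about a third over the original, so 2X should be a comfortable upper bound.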
But I'm just ruminating. The real trick comes in breaking the data into the tree, when you just pulled the latest imagery off the satellite and want to start making decisions immediately.
The only other answer: RAM and lots of it! 32 bits of address space is beginning to seem as much of a joke as 16 bits was... much less the whopping 20 bits we used to get in DOS.
#Comment Re: made: 2005-01-15 18:32:04.585047+00 by:
ebradway
Oh yeah, one other thought:
A lot of geoviz lends itself well to massive multiprocessing. Maybe what you really need to do is subdivide the geography and farm it out to lots of separate systems, then use some signal processing to blend the output of the machines together. The problems of data size essentially go away, and you're left with some really crazy work to put it all back together. And a mess of systems administration.
As far as the blending goes, it seems a little crazy until you stop and think again about what you are ultimately doing: projecting a sphere onto a flat surface. You're already doing a lot of transforms just to maintain some semblance of accuracy. Why not just take the transforms a lot farther?
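The carving-up half of that is mechanical. A sketch, assuming a plain rectangular split of the extent with one piece per machine; the Extent struct and grid layout are illustrative only, and the blending back together is where the crazy work lives:

```cpp
// Split a geographic extent into an n x n grid of sub-extents,
// one per worker machine. Names and layout are illustrative only.
#include <vector>

struct Extent { double west, south, east, north; };

std::vector<Extent> partition(const Extent& e, int n) {
    std::vector<Extent> parts;
    double dw = (e.east - e.west) / n;    // width of one column
    double dh = (e.north - e.south) / n;  // height of one row
    for (int r = 0; r < n; ++r) {
        for (int c = 0; c < n; ++c) {
            Extent sub = { e.west + c * dw,       e.south + r * dh,
                           e.west + (c + 1) * dw, e.south + (r + 1) * dh };
            parts.push_back(sub);
        }
    }
    return parts;
}
```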
#Comment Re: made: 2005-01-16 01:13:56.391927+00 by:
Dan Lyke
Storing sub-resolutions as a tree (often a quadtree) is common ("MIP mapping"); in fact VTP attempts to do that on the fly if you pass it one big texture. As often happens with such things, the cache algorithms are, of course, the place for enhancement there.
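Concretely, the cache ends up keyed on those pyramid coordinates. A minimal LRU sketch, assuming some hypothetical loadTile() that decodes a tile on a miss; VTP's actual caching certainly differs:

```cpp
// Least-recently-used cache of decoded tiles keyed by (level, row, col).
// Tile and loadTile() are hypothetical stand-ins, not VTP's API.
#include <list>
#include <map>
#include <utility>

struct Tile { /* decoded pixels for one pyramid tile */ };
typedef std::pair<int, std::pair<long, long> > Key;  // (level, (row, col))

extern Tile* loadTile(const Key& k);                 // hypothetical loader

class TileCache {
    typedef std::map<Key, std::pair<Tile*, std::list<Key>::iterator> > Map;
    size_t capacity_;
    std::list<Key> lru_;                             // front = most recent
    Map map_;
public:
    explicit TileCache(size_t capacity) : capacity_(capacity) {}
    Tile* get(const Key& k) {
        Map::iterator it = map_.find(k);
        if (it != map_.end()) {                      // hit: bump to front
            lru_.erase(it->second.second);
            lru_.push_front(k);
            it->second.second = lru_.begin();
            return it->second.first;
        }
        if (map_.size() >= capacity_) {              // evict oldest tile
            Key oldest = lru_.back();
            delete map_[oldest].first;
            map_.erase(oldest);
            lru_.pop_back();
        }
        Tile* t = loadTile(k);                       // miss: decode tile
        lru_.push_front(k);
        map_[k] = std::make_pair(t, lru_.begin());
        return t;
    }
};
```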
I think the hope has been that this application will run on fairly lightweight computers, so the "drop this on a huge renderfarm" thing isn't going to happen. What I really need is to have some discussion about application needs and use cases: My personal opinion is that it does us no good to be showing tread patterns in tire tracks if we're only tracking to GPS resolution.
#Comment Re: made: 2005-01-16 22:07:40.783755+00 by:
ebradway
it does us no good to be showing tread patterns in tire tracks if we're only tracking to GPS resolution
Sounds like the same issue I had when I made sports games and the competition started texture-mapping actual photos of the players onto the animated characters in the game. You only get about 10x10 pixels at best, and all of the characters are based on the same motion capture and are essentially the same size and physical build. I thought it would just look weird. But now, seven years later, everyone texture-maps photos of players onto the game characters.
People like to see detail even when it's not accurate. Even worse, they assume that because they can see detail, the data must be accurate to that level. In geography this plays hard on the sense of place: most people give directions using landscape details, not physical coordinates. When people look at an on-screen image with recognizable geographic features, they naturally assume the coordinates are correct.
You're not designing a missile targeting system, are you?
And you're not going to be throwing around 20GB textures on lightweight hardware anytime soon!
#Comment Re: made: 2005-01-17 18:58:29.718038+00 by:
jeff
I couldn't agree with you more about the need to discuss the application needs of your users. That will help you tremendously in setting technology boundaries and parameters for implementation, especially given the limiting resolution of GPS.