Re: www.humanmarkup.org
- To: idrama@flutterby.com
- Subject: Re: www.humanmarkup.org
- From: "Brandon J. Van Every" <vanevery@indiegamedesign.com>
- Date: Thu, 26 May 2005 15:34:45 -0700
- In-reply-to: <20050526145541.27623.qmail@lynx.eaze.net>
- Organization: Indie Game Design
- References: <20050526145541.27623.qmail@lynx.eaze.net>
- Reply-to: idrama@flutterby.com
- Sender: owner-idrama@mail.flutterby.com
- User-agent: Mozilla Thunderbird 1.0.2 (Windows/20050317)
whitncom@lynx.eaze.net wrote:
> Has anyone looked at http://www.humanmarkup.org ?
> I'm not sure if it's relevant to Interaction and Fiction, but maybe
> someone else has some ideas?
I glanced at it. I'm a 3D graphics guy with a limited knowledge of
motion capture, both in terms of theoretical principles and industrial
practice. The range of human behaviors that one could 'mark up' is
incredibly vast. For instance, think of all the ways you could scrunch
up your face. Also there are zillions of body types that such behaviors
can be applied to, and the problem of scaling behaviors to fit different
body types is decidedly non-trivial. Thus, to be technically feasible,
a "Human Markup Language" is inevitably some kind of very limited
subsampling of what human beings can do. I haven't read the spec, but
I'd expect it to contain something about a "reference avatar" that
performs a lot of common movements that embody e-commerce problems. For
instance, looking in particular directions, moving the lips according to
some phonemes in a not-too-accurate way, waving, smiling, nodding,
pointing, etc. I would expect a Human Markup Language to be adequate
for putting a human face (or body) upon a UI. In other words, I'd
expect it to handle something like
http://www.oddcast.com/sitepal/?promotionId=1385&bannerId=134
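To make that concrete, here's a purely hypothetical sketch of what reference-avatar markup like that might look like. I want to stress that these element and attribute names are invented for illustration; I haven't read the actual HumanML spec, so don't take any of this as its real vocabulary:

```xml
<!-- Hypothetical sketch only. Element/attribute names are invented
     for illustration and are NOT from the actual HumanML spec. -->
<avatar ref="reference-avatar-01">
  <gaze target="camera"/>
  <expression type="smile" intensity="0.7"/>
  <gesture type="wave" hand="right" duration="2s"/>
  <speech lipsync="approximate">
    Welcome to our site. How can I help you?
  </speech>
</avatar>
```

The point of a sketch like this is that each tag names one of a small, fixed set of canned behaviors. That's exactly the "very limited subsampling" I mean: fine for putting a face on a UI, hopeless for the scrunched-up-face space of actual human expression.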
I would not expect such a standard to encompass anything close to
dramatic subtlety. First off, the data and streaming requirements are
too heavy for XML. Second off, in industrial practice I don't think any
of the "high quality motion" stuff is that mature yet. Last I checked,
people were still getting bad input streams from their mocap that they
had to clean up tediously by hand. I think I last looked 2 years ago;
there may have been some progress since then. On the other hand, I just
looked at whether speech synthesizers are capable of dramatic inflection
and authoring, for purposes of film or game work. They're not. That's
"next generation" stuff, and it may be 5 to 10 years before it's delivered
in any industrially meaningful sense. One guy at the Audio Engineering
Society laughed that this kind of problem has always been "5 years out,"
forever. Which fits with my observation that for the vast majority of
businesses, anything that's 5+ years in the future is in the infinite
future, and nowhere near to getting done. Most companies simply don't
plan farther than 5 years in advance. They wait and see whether someone
else makes progress on a problem first, and let that someone else take
the initial lumps.
Cheers, www.indiegamedesign.com
Brandon Van Every Seattle, WA
When no one else sells courage, supply and demand take hold.