Or just consider these a place for storing the musings of mad people :D
Ever since I read Mr. Crosbie Fitch's article series "Cyberspace in the 21st Century", I've been taken with one of his ideas: "Scalable Fidelity".
Having client/server software able to represent the world perceived by one's avatar across an extremely wide range of fidelity levels, from plain text to fully ray-traced VR environments, pushes many of my buttons (AI, lossy data compression, storytelling) :)
Just visualize a text MUD-like game slowly transforming into a 2D top-down environment, then a 2.5D isometric one, and finally a fully realized 3D world, all depending on the amount of resources available.
Sadly, fully realizing this vision will likely require pretty strong AI techniques. Still, one can imagine a somewhat similar approach being applied to content distribution:
If a content creator adds enough semantic tags to the objects they add to the world, we can do some pretty strange and wonderful things :)
Let's say we have a house model:
Code:

object_type -> [house, mansion]
size        -> large, [bounding box]
mood        -> [creepy, abandoned]
description -> "Old mansion abandoned years ago; all the windows are boarded, but strangely enough for this part of the city, there are no signs of homeless people anywhere near the property"
concept_art -> [a reference to a set of images follows]
model_2d    -> [a 2D image of the house: top, front, etc.]
model_3d    -> [a 3D model description]
Given a very 'smart' client with a pretty large cache of content, just a few tags plus information about the other objects in range would allow us to present a new object to the player right away, then refine the presented model while the full data is streamed in.
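To make the refinement idea concrete, here is a minimal sketch of how such a client might pick what to render. All the names (the tag keys, the asset identifiers, the fidelity ordering) are hypothetical, borrowed loosely from the house example above: the client simply renders the richest representation whose data has arrived, and upgrades as more streams in.

```python
# Lowest to highest fidelity; a real client would have many more levels.
FIDELITY_ORDER = ["description", "model_2d", "model_3d"]

def best_representation(obj, cache):
    """Return the highest-fidelity tag whose asset data is available locally."""
    best = None
    for key in FIDELITY_ORDER:
        if key in obj and obj[key] in cache:
            best = key
    return best

# Hypothetical tagged object, as a plain dict of tag -> value/asset-id.
house = {
    "object_type": ["house", "mansion"],
    "mood": ["creepy", "abandoned"],
    "description": "desc#old_mansion",
    "model_2d": "img#mansion_topdown",
    "model_3d": "mesh#mansion",
}

cache = {"desc#old_mansion"}            # only the text has arrived so far
print(best_representation(house, cache))  # falls back to the description

cache.add("img#mansion_topdown")        # the 2D sprite streams in
print(best_representation(house, cache))  # upgrades to the 2D model
```

The point of the ordering is that the player always sees *something* meaningful immediately, even if it is just the text description, which is exactly the scalable-fidelity gradient from MUD to 3D world.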
Such tags would also be very useful to storytellers looking for content to help them tell their stories.
OK, that's enough rambling for one post :P