Velocidevorium is on Steam Greenlight

January 31, 2016 by Terry

Velocidevorium is coming along smashingly. So much so that I decided to put it on Steam Greenlight. If enough people visit the Greenlight page and vote for it, then it will be possible to publish it on Steam as soon as it releases.

There’s no word on the release date yet since Velocidevorium is still in alpha. But I plan to keep building it up and adding features and game modes until it’s ready for release. Getting it greenlit on Steam also makes it possible to publish it in Early Access, which is something I’m still considering.

2D Viewer Apps Are Not Virtual Reality

January 4, 2016 by Terry

If you have gotten your hands on a Google Cardboard, Gear VR, or similar virtual reality (VR) headset, you likely followed up by downloading a bunch of VR demos to your smartphone. Common types of demos are 360-degree photos, 360-degree videos, and scenes rendered with 3D graphics. Unfortunately, much of the 360 photo and video content comes from a feeble pile of me-too apps. You can find some reasonable theories about the nature of the problem, such as narrow fields of view, or poor framerates and the motion sickness they cause. However, I have not seen anyone asserting the biggest problem of all: much of the photo and video content is 2D.

Nearly any definition of VR that you can find requires that the simulated environment be made to appear real. So any hardware or software that doesn’t bother to simulate 3D stereo vision, the way the vast majority of sighted humans see the world, simply isn’t VR. The main offenders in this category, and the focus of this article, are some image and video viewers. Apps that draw everything with computer graphics are generally better because they almost all have a quality 3D effect, though some of them still find ways to strain your eyes. You could also reject 360 photo and video viewing based on its lack of interactivity, but let’s stick to the 3D deficiencies for this article.

2D vs. 3D

Figure 1 shows what a smartphone screen should look like when content is presented properly to generate a believable 3D effect. Figure 2 shows what many VR apps actually look like.

Figure 1

Figure 1. A 3D image pair, photos taken 6 centimeters apart just like your eyes. Note that you can only see the yellow side of the cube from your left eye.

Figure 2

Figure 2. A 2D image pair, two crops of the same photo.

Do you see the difference? Both figures contain images for the left and right eye that look slightly different. Figure 1 contains images taken from two different locations, simulating the parallax that your eyes capture in real life (you can only see the yellow side of the cube from one eye). Figure 2, on the other hand, contains two views of the same image. The views are framed with a small horizontal shift relative to one another in an attempt to prevent double-vision and make everything look very far away. If the horizontal shifting is done poorly, the user may be required to go cross-eyed or wall-eyed (the opposite of cross-eyed) for his eyes to converge on the images. Many users will not figure this out and see a confusing (or painful) double-image. Either way, the 2D image pair does not simulate the way your eyes actually see the world.
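To make the distinction concrete in code, here is a minimal sketch of a true stereo render versus a fake 2D pair. This is my own illustration, not code from any of the apps discussed; the renderScene and blitToEye calls are hypothetical, and the 6-centimeter eye separation matches Figure 1.

#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

int main() {
    // True 3D: render the scene twice, from two eye positions 6 cm apart
    // along the camera's right vector. Each eye sees genuine parallax.
    Vec3 head = {0.0f, 1.7f, 0.0f};   // viewer position in meters
    Vec3 right = {1.0f, 0.0f, 0.0f};  // camera's right vector
    float eyeSeparation = 0.06f;      // 6 cm, as in Figure 1

    Vec3 leftEye = add(head, scale(right, -0.5f * eyeSeparation));
    Vec3 rightEye = add(head, scale(right, 0.5f * eyeSeparation));
    // renderScene(leftEye);    // hypothetical renderer call, left viewport
    // renderScene(rightEye);   // hypothetical renderer call, right viewport

    // Fake "3D", as in Figure 2: decode ONE image and show it to both eyes
    // with only a horizontal pixel shift. No parallax is ever created, so
    // nothing in the scene can appear nearer or farther than anything else.
    // blitToEye(LEFT, image, +shiftPixels / 2);    // hypothetical
    // blitToEye(RIGHT, image, -shiftPixels / 2);   // hypothetical

    printf("left eye x = %.3f m, right eye x = %.3f m\n", leftEye.x, rightEye.x);
    return 0;
}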

The Scattered State of Mobile VR

So why do so many app creators present 2D content in VR headsets that are designed for displaying 3D content? One possibility is that they want to make 3D content and have no idea what they are doing. Another possibility is that they are simply too lazy to make the content correctly. Yet another is that they want to ride the latest VR craze with me-too apps and think their viewers are too stupid to spot the difference between 2D and 3D. The main reason to show 360 imagery in a VR headset is so your phone can track your head rotations and display different parts of the imagery accordingly, but when the imagery is 2D your VR headset becomes little more than a magnifying glass. You may be better off holding the phone in front of your face without a headset so you can enjoy the full resolution of your screen.

Some 2D apps, such as NYT VR, at least attempt to show their imagery at a proper field-of-view for each eye. A Google Cardboard-style headset usually has a QR code on it that can be scanned by the Cardboard app. NYT VR uses this information to calibrate the imagery on your smartphone for your particular headset. Google even provides a QR code generator so you can calibrate your headset if the manufacturer didn’t already provide a QR code. NYT VR received criticism for giving people double-vision. This was likely due to phones not being calibrated for the correct headset.

Worse are apps that ignore or misuse the information provided in the QR code. RYOT – VR and Ascape Virtual Travel & Tours lazily present two identical images without even compensating for lens warp. With these you’ll need to go cross-eyed to view images that appear warped. Even worse is Vrideo, which does correct for lens warping but makes you go painfully wall-eyed by applying too large a horizontal shift to each eye’s view. (I used a 5-inch screen for all these tests. Your experience may be better or worse with a different screen size.)

If you are interested in seeing the VR industry take off, you should be bothered by all of this. Most people’s first experience with VR will be with a mobile headset. If the first app they see shows wall-eyed 2D imagery, they will likely dismiss VR as underwhelming or even annoying.

On the bright side, there are some apps that present 360 video in 3D. Jaunt VR and Vrse – Virtual Reality provide video with the proper parallax to simulate stereo vision, the way we actually see. Cardboard Camera will let you take 360 photos that also have a believable 3D effect.

The Displaced is a video shown in 3D on Vrse and 2D on NYT VR. I don’t know why NYT VR takes quality content and reduces it to 2D. Perhaps the creators of NYT VR wanted smaller downloads, or perhaps they think their viewers won’t notice. In any case, if you’re going to call it VR you can at least attempt to make it feel real by using 3D imagery.

Wrap-up

The best 3D you’ll get in a VR headset will be done with real-time computer graphics because every image is rendered for your exact eyepoint, allowing you to rotate your head into any orientation. 3D photos and video are a good alternative if they accurately simulate stereo vision. The 3D effect is baked into the imagery instead of computed in real-time as with computer graphics, so it will generally only work correctly if you keep your head level and look toward the horizon in any direction. Again, try Jaunt VR or Vrse – Virtual Reality for good examples. Lastly, 2D “Virtual Reality” is a cop-out forged by charlatans. If someone tries to show you 2D VR, do him a favor and show him some quality VR.

The state of VR articles and reviews is as random as the quality of smartphone VR. Top-10 lists are overflowing with all the apps mentioned in this article, including the bad ones. If you’re a journalist, please take the time to understand the differences between quality content and all the me-too nonsense. Maybe the situation will improve after the release of several high-end headsets (HTC Vive, Oculus Rift, PlayStation VR) in 2016, but most people’s introduction to VR will probably still be with hit-or-miss smartphone VR.

(This article has been Android-centric. If you have related experiences with iOS-based VR, please describe them in the comments.)

Making Cubemaps from Overlapping Photos

September 24, 2015 by Terry

Cubemaps are used heavily in games and other computer graphics applications. There are endless rendering techniques employing them for background imagery, reflections, and lighting. You can make a cubemap purely in software by rendering six different views from one viewpoint. This can be done in a modeling program, such as Blender or 3ds Max, or in real-time during a game’s execution. Alternatively, you can make a cubemap by combining photos that view every possible direction from one point in space.
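If you take the software route, each of the six views is just a square render with a 90-degree field of view along one of the principal axes, all from the same eye point. Here is a minimal sketch of the camera setup; the lookAt, perspective, and render calls in the comments are hypothetical stand-ins for whatever your engine or modeling tool provides.

#include <cstdio>

struct Vec3 { float x, y, z; };

int main() {
    // The six view directions and their up vectors, in the usual OpenGL
    // cubemap face order (+X, -X, +Y, -Y, +Z, -Z).
    struct Face { const char* name; Vec3 dir; Vec3 up; };
    const Face faces[6] = {
        {"posx", { 1, 0, 0}, {0, -1,  0}},
        {"negx", {-1, 0, 0}, {0, -1,  0}},
        {"posy", { 0, 1, 0}, {0,  0,  1}},
        {"negy", { 0,-1, 0}, {0,  0, -1}},
        {"posz", { 0, 0, 1}, {0, -1,  0}},
        {"negz", { 0, 0,-1}, {0, -1,  0}},
    };
    for (const Face& f : faces) {
        // For each face, render a square image with a 90-degree field of
        // view from the same eye point:
        //   view = lookAt(eye, eye + f.dir, f.up);        // hypothetical
        //   proj = perspective(90.0, 1.0, zNear, zFar);   // fov, aspect 1
        //   renderToImage(view, proj, f.name);            // hypothetical
        printf("render face %s\n", f.name);
    }
    return 0;
}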

What follows is a recipe for combining photos to make a cubemap. I did all this work on Linux, but everything here uses free software that you can also find on Windows and OS X, so if you aren’t a Linux user you should still be able to adapt this recipe to your platform of choice.

The incredibly short version: Take overlapping pictures in every direction from one point. Use Hugin to stitch them together into a 360×180-degree equirectangular panorama (imagine the surface of a globe flattened out into a rectangle). Use PanoTools to convert the panorama to a cubemap.

There may be smartphone apps or other software that can stitch 360×180-degree equirectangular panoramas like the one shown in Step 4. However, I haven’t found any that can match the quality of Hugin. If you can produce such an image that you are satisfied with, you can skip to Step 6.

1. Install software

Install Hugin and PanoTools. Hugin is an outstanding program for stitching images together to form panoramas, and it has a powerful graphical user interface. PanoTools is a collection of programs that aid the creation of panoramas. I’m continually amazed by some of the free software out there.

On Fedora:
> yum install hugin
> yum install panotools

On Ubuntu:
> sudo apt-get install hugin
> sudo cpan
> install Panotools::Script
> quit

<soapbox> For any non-Linux users reading this, take note of how easy it is to install free software on Linux. No downloading installers, and no unchecking boxes that try to covertly install spyware on your computer. One of my favorite aspects of Linux is how easy it is to explore a new concept. You can go online to read about the top five tools for a particular task, and then you can quickly install them all to try them out. You can find a lot of good software in the default Linux repositories. In this case, however, PanoTools is more exotic and I had to get it from CPAN on Ubuntu. </soapbox>

2. Collect images

You will need a digital camera for gathering images. Your images should cover absolutely everything you can see from one location, and they should overlap so that the software described in the next section can find similar points on each image and stitch them together. A good rule-of-thumb for the amount of overlap is about one third the image width.

The camera I use has a pretty wide-angle lens (24mm equivalent), so about thirty images is usually enough. If you have a narrower field of view, you will probably need more images. A tripod helps a lot to control the yaw and pitch of your camera as you work your way through all the capturing.
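As a rough sanity check on those numbers (my own arithmetic, not part of the original recipe), you can derive the shot count from the one-third overlap rule. The 74-degree horizontal field of view below is my assumption for a 24mm-equivalent lens in landscape orientation.

#include <cmath>
#include <cstdio>

int main() {
    // Assumed horizontal field of view for a 24mm-equivalent lens held
    // in landscape orientation.
    double fovDegrees = 74.0;
    double overlap = 1.0 / 3.0;  // the one-third overlap rule-of-thumb

    // Yaw step between adjacent shots so that neighbors overlap by a third.
    double step = fovDegrees * (1.0 - overlap);
    int shotsPerRing = (int)std::ceil(360.0 / step);

    // Four rings at different pitches, plus one shot up and one down.
    int total = 4 * shotsPerRing + 2;
    printf("%d shots per ring, about %d shots total\n", shotsPerRing, total);
    return 0;
}

That works out to eight shots per ring and about thirty-four shots in total, which lines up with the thirty or so images mentioned above.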

images

As you can deduce from the thumbnails above, I took four rings of images at four different pitches and one image straight up and one straight down.

The sky can be a problem because, if there aren’t enough clouds, neither you nor Hugin will be able to tell where your images overlap. It’s not impossible to stitch images of a cloudless sky, but you will need to specify control points when Hugin cannot choose them automatically, and they will be hard to choose even for a human. Remembering which way you were pointing the camera for each image will help a lot in this scenario, and following a consistent pattern when you gather images will help you remember.

Even when there are plenty of clouds in the sky, you can have problems because clouds constantly move and change shape. The slower the clouds are moving and the faster you collect your images, the easier it will be for Hugin to figure out where your images overlap.

3. Stitch images with Hugin

This is the most involved step. Hugin is wonderful, but, like any meaty program, it requires some patience and practice to become proficient. Rather than repeat all the minutiae that you can find in other (better) Hugin tutorials, here is the short version of what you need to do to make a cubemap (or any other panoramic image, for that matter).

3a. Load and align images

After starting Hugin, press the “Load images…” button and select all your images. You should see the main graphical view fill up with your images, possibly positioned and warped into Hugin’s best initial guess at how they should be arranged for a panorama.

Next, press the “Align…” button. Hugin will now take some time to search for control points (points that match between pairs of images). By the end of this step, you should see something like this:

panorama_preview

3b. Add control points

The source images I used gave me a little trouble for both of the usual reasons: there were not enough clouds in the sky and the clouds changed too much over the duration of my capturing. For these reasons, Hugin popped up a window with the following message:

Warning 3 unconnected image groups found:[0, 1, 2, 3, 4, 5, 6, 7, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [8, 9, 10, 11, 12, 14], [13]
Please create control points between unconnected images using the Control Points tab.

After adding the points, press the "Align" button again

If this happens, do as instructed and find Hugin’s Panorama Stitcher window where you can add control points of your own (Hugin will probably open this window automatically for you).

panorama_stitcher

Manually adding control points is the most time-consuming part of stitching images, but it is often necessary. Once you think you have enough control points between all adjacent pairs of images (three or four is usually enough), press the “Align…” button again. You might need to repeat this process to get things aligned well enough.

And don’t forget to save your work occasionally.

3c. Align your horizon

Once you get Hugin to stop complaining about unconnected image groups, go back to the Fast Panorama preview window and select the Move/Drag tab. By dragging the main image different ways, you should be able to get your horizon line to go straight across the image. Here is what you will see when your horizon is NOT lined up properly:

wobble

3d. Stitch

Go back to the Panorama Stitcher window and select the Stitcher tab. The important settings here are “Projection” = Equirectangular and “Field of View” = 360 x 180. If you have prepared enough images and connected them with enough control points, Hugin will already have these settings figured out.

To make a high-resolution cubemap, say one with 1024×1024 faces, you need to start with an equirectangular projection with enough resolution. Each cube face covers a 90-degree field of view, one quarter of the panorama’s 360-degree width, so the equirectangular image should be roughly four times the face size wide or more (4096 pixels for 1024-pixel faces). 6000×3000 is plenty, so you can set the Canvas Size accordingly. It is good not to go overboard on these numbers. Hugin can easily produce gigantic images that consume a lot of hard drive space and are slow to edit.

Once you have everything set the way you like, press “Stitch!”

4. Edit the equirectangular image

If all goes well, Hugin will produce a stunning panorama of your scene, complete with severe warping at the top and bottom of the image. You will probably want to remove shadows of yourself and your tripod and maybe tweak the colors, so load the panorama into your favorite image editing program, such as GIMP or Photoshop. It is difficult to edit anything at the extreme top or bottom of this image because it is so warped. Instead, you might want to wait on those parts and edit the top and bottom cubemap images. When finished, you should have something like this:

panorama

Look closely and you can see that someone wrote “I LOVE YOU” on the ground with his shoe. Isn’t that cute?

5. Panini (optional)

This step is just for fun, so you can skip it if you want. Panini is an image viewer that can load your equirectangular projection. It wraps the image around your eyepoint and allows you to look in any direction.

On Fedora:
> yum install Panini

I haven’t found an easy way to install it on Ubuntu yet. Please tell me in the comments if you know of one.

6. Convert equirectangular panorama to cubemap

Finally we make the actual cubemap. It would be convenient if Hugin could produce cubemap images for us, but it doesn’t have built-in functionality for that. Maybe it will someday.

Instead, we use PanoTools’s erect2cubic script to create a PTO file describing cubemap parameters and feed that file to Nona so it can create the cubemap. Nona is a stitching program that is installed along with Hugin. You’ll want to do something like this:

> erect2cubic --erect=yourpanorama.tif --ptofile=xxx.pto --face=256
> nona -o xxx xxx.pto

You can replace the xxx parts with another name if you like. To make my life easier, I made the following script that produces only the final cubemap images and nothing else. (This script also uses convert, which you can get by installing ImageMagick.)

#!/bin/sh

# check if called properly
if [ $# != 2 ]
then
    echo "Usage: $0 <panorama> <prefix>"
    echo "This script converts a 360x180-degree equirectangular panorama to a cubemap. Each image is named starting with <prefix>."
    exit 1
fi

# save command line args in variables
panorama=$1
prefix=$2

# create .pto file
erect2cubic --erect=${panorama} --ptofile=${prefix}.pto --face=1024

# convert panorama to cubemap faces
nona -o ${prefix} ${prefix}.pto

# convert and rename cubemaps so they are named more descriptively for OpenGL
convert ${prefix}0000.tif ${prefix}_posz.jpg
convert ${prefix}0001.tif ${prefix}_posx.jpg
convert ${prefix}0002.tif ${prefix}_negz.jpg
convert ${prefix}0003.tif ${prefix}_negx.jpg
convert ${prefix}0004.tif ${prefix}_negy.jpg
convert ${prefix}0005.tif ${prefix}_posy.jpg

# remove intermediate files
rm ${prefix}.pto
rm ${prefix}0*.tif

After you use erect2cubic and nona you should end up with your six final cubemap images.

cubemap

You might see some pinching at the center of the top and bottom images, but it has been really subtle in my cubemaps. You can easily fix it with GIMP.
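If the cubemap is destined for your own OpenGL program, here is a hedged sketch of loading the six faces the script produces into a cubemap texture. It assumes the stb_image library for JPEG decoding and a GL header new enough to expose the cubemap enums (OpenGL 1.3 or later); swap in your own image loader as needed.

// Load the six face images produced by the script above into an OpenGL
// cubemap texture. Assumes an active GL context.
#include <GL/gl.h>
#include <cstdio>
#include <string>
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

GLuint loadCubemap(const std::string& prefix) {
    // Suffixes listed in the same order as the GL_TEXTURE_CUBE_MAP_* enums:
    // +X, -X, +Y, -Y, +Z, -Z.
    const char* suffixes[6] = {"_posx", "_negx", "_posy",
                               "_negy", "_posz", "_negz"};
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
    for (int i = 0; i < 6; ++i) {
        std::string path = prefix + suffixes[i] + ".jpg";
        int w, h, channels;
        unsigned char* pixels = stbi_load(path.c_str(), &w, &h, &channels, 3);
        if (!pixels) {
            fprintf(stderr, "failed to load %s\n", path.c_str());
            continue;
        }
        // The cubemap face enums are consecutive, so adding i walks them.
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB,
                     w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
        stbi_image_free(pixels);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return tex;
}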

Wrap-up

If you want to get fancy, Hugin will also do exposure fusion, which I mentioned in a previous blog entry. Next time I make a cubemap I want to try taking multiple exposures of each image and fusing them so the dark areas are brighter and the sun is actually visible instead of appearing as a giant over-saturated glob of white in the sky.

That’s all. Have fun making cubemaps with free software.

Visibility on Steam

August 13, 2015 by Terry

As a game developer, you’re probably always looking for ways to show your game to more people. A small fraction of those eyeballs convert to sales. If Steam is distributing any of your games you’re likely familiar with Steam’s Marketing & Visibility page, accessible after logging into Steamworks and clicking on one of your games. If you’re not familiar, go study it.

Steam gives you five Product Update Visibility Rounds during which they will display a link to your store page on their homepage. This link gets displayed 500,000 times each round, and you can start a round any time you make a game update and a related announcement.

I recently posted a free demo of Retrobooster on Steam. It’s not exactly a game update, but it was enough for Steam to allow me a round of visibility.

Retrobooster’s launch visibility (1,000,000 views) lasted a little over an hour, similar to Burak Tezateser’s experience with Monochroma as he describes in Dynamics of Steam as a Sales Platform. From that, I incorrectly concluded that a round of 500,000 views would last half an hour or a little more.

Retrobooster is an English-only game and has made three quarters of its Steam sales to players in North America and Western Europe, which gives me a decent picture of its current audience. By checking Steam Download Stats for those regions, you can take an educated guess at when to start a visibility round.

steam_bandwidth

I sampled these charts on a few different days and decided to start my visibility round at 11am. It was over quickly.

steam_visibility

But not as quickly as expected. My visibility round of 513,051 views lasted almost two and a half hours. In retrospect, this seems reasonable. Each game gets one shot at launch visibility but five chances at update visibility, so at any given moment several times more games are competing for the Recently Updated slots than for the New on Steam slots. Since Steam’s homepage shows an equal number of New on Steam and Recently Updated game links, each update link must be displayed at a correspondingly slower rate than each launch link. If I start another visibility round I’ll aim for about an hour earlier.

Still, the time I chose for the visibility round appears to have worked out well since it resulted in 50% more clicks than the average. It remains to be seen if the Retrobooster demo will help sales. Maybe that’s a topic for a future article. I’m interested to hear other developers’ experiences with marketing on Steam. Don’t hesitate to leave your story in the comments.

Snake Game Prototype

February 24, 2015 by Terry

This little prototype began as an idea for a twin-stick shooter in the vein of Super Stardust HD. But while the action in Super Stardust HD takes place on a sphere, I wanted to see if similar gameplay was possible on implicit surfaces.

The idea was that the movement of the surfaces would supply a constantly changing play field. The way the primitives that compose the surface break apart and rejoin supplies a new mechanic to aid in attacking or escaping from enemies. Jumping could also be implemented as a means of getting from one primitive to another.

I already had an old implicit surface library that I made years ago. That took care of drawing the surfaces. On top of that I added some basic navigation and shooting of projectiles.

A great-looking enemy for this sort of twin-stick shooter would be a snake or worm, reminiscent of Centipede. And, when prototyping, it’s good to tackle the most difficult parts first. Such an enemy was the trickiest thing I could think of programming in this environment, so I got to work.

It wasn’t long before I decided this implicit surfaces environment would be a good platform for a collection of mini-games, not just a twin-stick shooter. One obvious possibility is a snake game where you eat food, grow, and try to not run into your ever-lengthening body.

As expected, making a snake move the way I wanted was hard. This video shows one of my first attempts. Notice that the body slides around a lot.

Sliding randomly across the surface, the snake would be too unpredictable. To make such a game playable, and to make the snake look alive, it needed to slide along one unchanging path. Of course, the path would change a little because the surface is changing, always stretching or compressing underneath the snake.

I had programmed the necessary basics for my twin-stick shooter idea. For example, I could already differentiate between an entity’s velocity and the velocity of the surface on which it resided. So I set about solving the problem with a slew of vector math tricks to try to preserve the snake’s curves. I kept making so many corrections to the movement that the code became overcomplicated and hard to think about. I even considered laying unmoving points on the surface that could be used as control points for some type of spline and having the beads of the snake body follow the spline, but this failed too many thought experiments.

The final solution is a two-step process. First, create a polynomial curve between each pair of beads. This curve is defined by the positions of the two beads in the pair and a center point. The center point is a weighted average of points indicated by various beads’ positions, and the weighting is carefully tuned so that the snake will hold its shape.

snake_poly

Second, a target position is found for each bead somewhere along its polynomial curve, and the bead’s velocity is modified to bring it toward that position. It isn’t brought all the way there, so the snake feels more flexible and alive (this is an art project, after all; we don’t need cold, mathematical precision).
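The description above is the idea; the exact weights were tuned by hand, so what follows is only a minimal sketch of the two-step process in C++. The control-point weighting, the chase gain, and the follow amount are placeholders to tune, not the prototype’s real numbers.

#include <algorithm>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

struct Bead { Vec3 pos, vel; };

// Quadratic curve through endpoints a and b, bent toward control point c.
Vec3 curvePoint(Vec3 a, Vec3 c, Vec3 b, float t) {
    float u = 1.0f - t;
    return a * (u * u) + c * (2.0f * u * t) + b * (t * t);
}

// Step 1: control point for the curve between beads i and i-1, a weighted
// average of nearby bead positions. The 0.5 bulge factor is a placeholder;
// tuning it is what makes curves hold, collapse, or expand.
Vec3 control(const std::vector<Bead>& b, size_t i) {
    Vec3 mid = (b[i].pos + b[i - 1].pos) * 0.5f;
    Vec3 outer = (b[std::min(i + 1, b.size() - 1)].pos +
                  b[i >= 2 ? i - 2 : 0].pos) * 0.5f;
    return mid + (mid - outer) * 0.5f;  // bulge outward to keep curvature
}

void updateSnake(std::vector<Bead>& beads, float followT, float dt) {
    if (beads.size() < 2) return;
    const float chaseGain = 6.0f;  // how hard beads chase their targets
    for (size_t i = beads.size() - 1; i >= 1; --i) {
        // Step 2: pick a target partway (followT in 0..1) along the curve
        // toward the bead ahead, then nudge the velocity toward it. Not
        // snapping all the way there keeps the snake flexible and alive.
        Vec3 c = control(beads, i);
        Vec3 target = curvePoint(beads[i].pos, c, beads[i - 1].pos, followT);
        beads[i].vel = beads[i].vel + (target - beads[i].pos) * (chaseGain * dt);
        beads[i].pos = beads[i].pos + beads[i].vel * dt;
    }
    // The head, beads[0], is driven by player input and the moving surface,
    // which this sketch omits.
}

Nudging velocities instead of snapping positions is the key design choice: the beads stay coupled to the moving surface while still following a single path.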

Notice in the video how the path of the snake holds its shape. If you adjust the polynomial weighting one way, curves in the path collapse into straight lines. Adjust it the other way and curves expand.

The next video shows the addition of food. When the snake eats, it gets longer. There is also collision detection, so if the head runs into the body the game resets. The collisions also keep parts of the body from passing through one another. When the surface changes enough to cause different strands to intersect, the collision detection causes one strand to rise off the surface a little and pass over the other. The camera has been modified to face the same direction as the snake’s head and tilted back a little to help the player see what dangers lie ahead.

At this point, it’s actually a playable game. Of course, there is no interface in the video, but you can see the length of your snake printed in the console where you executed the game. This is only a prototype of the difficult programming issues. To make it better you could add fancy graphics, levels, power-ups, multi-player, and anything else you can dream up.

Math is hard.

Retrobooster now has a proper trailer for its impending release on Steam. It will be available to buy there on Friday, July 11.