Going 3D?

You may have noticed that most of the videos thus far are visually a little… flat. The reason isn’t that we’re lazy. Well, being lazy isn’t the ONLY reason. There are several reasons we use static images of ourselves. First, neither of us can animate. Never have, and learning would take an incredible amount of one resource we don’t have: time. Aside from having to learn something totally new, animating takes a lot of time. So where does our time go now? Aside from corrupting the world, preparing for the apocalypse, and watching silly cat videos (all wrapped into the label called a “full-time job”), the largest chunk by far goes into editing. If video creation is broken into scripting/planning, voice over, editing, and publishing, editing dwarfs the rest. When I say it takes over 90% of the total time, that is no exaggeration.

First, there’s the audio, which is recorded in Reaper. Each voice over goes on separate tracks, which are brought into the main video editor, DaVinci Resolve. From there, the tracks are sifted through for the best takes, then cut up and placed to get the timing right, and the audio adjustment tweaks are made. Sounds fairly simple, and it is. But this alone takes a lot of time. Longer than it takes to make the actual audio. On the first few videos I made the decision to use speech bubbles on our lifeless bodies to give some sort of visual stimulation to go with our breathtaking voices. That is fairly simple as well, but even so it ended up taking an incredible amount of time. Now add in the fact that text had to be added for what was being read. All told, it would take about 6-8 hours just to make one 5-minute video.

It wasn’t until watching on someone else’s device that I realized the speech bubbles might not be necessary. They had closed captioning on by default. Since I upload the script for CC anyway, the impact of the speech bubbles is greatly reduced. I thought about using some sort of audio visualizer to add some spice; I’d seen some more famous YouTubers use something like that when live streaming. I searched all over the damned place, but couldn’t find what they used. I found something that might work in Kauna, and played around to get it to work and look right. I had to get the audio from the final video right, export the audio, run it through Kauna while recording video through OBS, and bring that video back into Resolve. I then had to figure out how to make it look better than just a smaller video placed within the main video. Through some editor magic, I was finally able to get the desired result: making the black of the Kauna video transparent so it looked nicer. This is what you saw in the Kama Sutra 1:1 video. If this sounds overly complex, it probably is… but it was still a LOT more time-efficient than making the speech bubbles, especially for the KS 1:1 video, which has a ton of dialogue.
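
(For the curious: “making the black transparent” is essentially a luma key, where the brightness of the visualizer footage doubles as its own transparency mask. The sketch below is purely illustrative, written in Python with OpenCV rather than anything Resolve actually runs, and the file names are made up.)

```python
# Illustrative sketch only: a crude "luma key" that treats the dark pixels of a
# visualizer frame as transparent and composites the bright parts over a
# background frame. File names are hypothetical; the real work happened in Resolve.
import cv2
import numpy as np

background = cv2.imread("background_frame.png").astype(np.float32)  # frame from the main video
visualizer = cv2.imread("kauna_frame.png").astype(np.float32)       # frame from the Kauna/OBS capture
visualizer = cv2.resize(visualizer, (background.shape[1], background.shape[0]))

# Use the visualizer's brightness as an alpha mask:
# black -> 0 (fully transparent), bright bars -> 1 (fully opaque).
luma = cv2.cvtColor(visualizer.astype(np.uint8), cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
alpha = luma[..., np.newaxis]  # shape (H, W, 1) so it broadcasts across the color channels

composite = visualizer * alpha + background * (1.0 - alpha)
cv2.imwrite("composite_frame.png", composite.astype(np.uint8))
```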

Frankly it’s all bullshit, but I love to create. I want to deliver the best I can within the time constraints that I have. And it bugs me that the videos have to be, for the most part, static images.

Enter the 3D possibility. It wasn’t until watching a certain video from Corridor Crew (great channel, check them out) that I got an idea that might finally add some much needed life to the videos. Using the LIDAR on the iPhone and a program, there is a way to motion capture a face and map that performance onto a 3D model. Those models can then be brought into Unreal Engine, where I can render scenes to make the videos. My mind (which is incredibly sexy) has been swimming with possibilities for future videos. The immediate result, however, would be that I could finally bring more visual life to these videos without needing a team of minions to make it happen. In theory, this could still be quicker than the speech bubble approach.

Now for the downsides. As with any new software and approach, this is going to take time to learn. I will have to learn the capture program and how to use Unreal Engine. I will also have to learn how to do the motion capture itself and figure out how to perform so it works the way I want, then work out the tweaks in the software(s) to pull it all together. Next is the cost… the software isn’t cheap, and since I don’t know how to 3D model, I’ll have to get someone to make models that will work. The models alone could run several thousand dollars. Then there’s the hardware… is the GeForce 1660 Super going to be enough, or do I have to get a better video card? You’d think money wouldn’t be a problem for the dark lord, but it gets hot down here and piles of cash tend to go up in flames. And let’s not forget, this is all still theoretical… I don’t know for certain that any of this will work.

Nevertheless, this possibility is being actively pursued. The upside (if it works) will be worth more than the downside. The main storytelling videos will be better, since they won’t have to be written for “radio” and can be written with more visual flair in mind. Other possibilities open up as well, such as different formats and styles. I’m not promising a time frame, but you’ll know it when you see it, if it happens.
