TC24


TC24 is an implementation of the binary-represented timecode idea: it lets artists timecode video frames ad hoc, so an online pass can reference back to the original footage after the creative process has finished.

The TC24 project isn't finished yet, but an early toolchain has been established, so it is almost usable now, with further development planned. The text below is a loose description of the project, starting with the ideas and ending with the technicalities.

DISCLAIMER

This will only be remotely interesting to some VIDEO ARTISTS who happen to be GEEKS. You have been warned.


Okay, this may come out a bit awkward; I've been thinking about it for too long.

DISCLAIMER: The following creative ideas are free. Software and visual content are released under CC (where applicable). Get inspired, modify, expand.



The basic idea is identical to the online/offline workflow used in video. This goes back to the early days of television.
When good-quality tape storage was very expensive and mostly linear (no random access on tape, sorry), several ways were developed to overcome its limitations. In the first editing systems a human operator would punch in and out codes on the deck controller, then commit the task to a complicated mechanical system of two or three synchronized magnetic heads, spinning in unison, to lift a bit of magnetically encoded information off the tape and write the same information onto the master copy.

Then came digital, non-linear editing systems: editors would first digitise thumbnails of each frame onto hard-disk storage, then do the edit digitally, able to quickly preview many different combinations of shots and make creative decisions without the magic of levitating magnetic rings and tapes.


Fast forward to today. We have two approaches. Gaining popularity is the new way, which relies heavily on intensive video compression: shoot with your DSLR, store on cheap hard drives. At the other end sits the top of the market, people trying to edit Hollywood-quality commercial content on their MacBook on a plane. Those people will still use the online/offline method, editing low-quality proxies and exporting EDLs (XML) for later online, grading and compositing tricks.

Today, consumer equipment is often better than the gear used to produce many early masterpieces; Metropolis was shot on something much worse than today's cellphone.
Back then it was an art of near magic to be able to tell exactly the same story with each playback (and may I remind you, that was long before any sound recording method had been invented). Today this is merely a technicality, as we move towards realtime video.


LANGUAGE / FUTURE

There is more and more verbal noise as our leaders wave their hands and say difficult words; silence will be their escape from the noise. I'm pretty sure people born in this century will communicate in a much different way than we do, and I don't mean just using an evolving, dynamically expanding language. As we spiral towards infinity, we will start using connections of much higher bandwidth. The human ear understands speech at roughly modem rate, modulated onto audio frequencies. Quite enough to establish a channel of communication, but there's so much more out there.

We can already absorb information presented in visual form much faster than we could through sound alone.

The next step is learning to speak visual.

TECHNIQUE

There is also an interesting tendency in computer development. Many years ago the expansion was mostly in clock and single-unit speeds, as we watched frequencies skyrocket from kilohertz to megahertz, to hundreds of megahertz, to gigahertz. And then it stopped. The machine decided that matter resists too much at the quantum level, so we started to multiply. Two cores sharing the same RAM, then virtualisation with HT, millions of dollars spent developing processing pipelines. A few years ago I laughed at the idea. Today you can have access to almost 1024 cores with OpenCL (or CUDA, by getting two Teslas). But it is 2011, and this power is only starting to be used in games; for most people who are not game developers, all of it is of not much use.

So for now we only have limited realtime capabilities. We need to sample and reuse. We will digitise and store, and be able to recall JPEGs from our digital memory storage in the cloud. People will be able to recall an image they have never seen, because someone else has seen that image before and uploaded it. OK, enough fast-forwarding. Back to the here and now.


VJing

To me, VJing has always seemed like the form closest to the language of our descendants, the next occupiers of this planet at its transition point. When you VJ, you pick samples from your hard drive and match clips from your video library to suit the theme of the moment. Of course VJing has the downside of long hours spent in loud environments full of drunken people, but you have to pay the price ;) In reward you get an audience, something no computer can simulate: real people experiencing your images. For me personally, the biggest thing in visuals is good sync with the music. Sometimes a tiny tweak of timing in the output (like cutting to clip B two frames later than on the previous bit) can make a lot of difference, and you can feel that it works not only for you; you can feel, through the space, that there are at least three other guys and one girl somewhere in the crowd watching.

Now of course nothing stops you from preparing exactly the same sync in your bedroom, using your favourite video editing software, but the feedback is slow. If you want to find that sweet timing you can easily become confused after a few cycles of 'put that clip a frame earlier and make the second one start two frames later'. You can draw envelopes, but there's nothing like the hands-on experience of playing a controller as an instrument.

I'm slowly getting to the point:
Despite all the amazing technology, in 2011 it's still a bit difficult to VJ with full HD, even if we mean H.264-compressed footage. You can if you try really hard, but there aren't many cheap ways of recording that output. And you might want to just jam and record for hours in the hope of catching that good part.


ONCE AGAIN TC24

TC24 is a solution to that particular problem.


The goal is to be able to freely perform video mixing on low-quality footage with timecode encoded into each frame. You can then take the recording of that performance and extract exact timing information: not only how the controls were positioned, but exactly which frame was on screen at any given moment. Using the extracted timecode you can perform an ONLINE of your recorded performance, referring back to the original footage, whatever the source was.
I'm sure for many of you this is a non-issue: when using FCP or AE you can pretty much preserve quality throughout the process and never have to worry about it. But what if you want to take a more complex path?

HOW TO REMIX AUDIOVISUAL CONTENT USING TC24



Example: remixing two short movies made by our friends into something new

1. ADD BURNED-IN TIMECODE (a bit more here). Take your source material and put it all on a single timeline, assigning an absolute position code to each frame you will use. I often render bits of that timeline as samples, or export the whole roll as a low-res proxy with the burned-in timecode visible, sometimes something as silly-low as 320x240. I try to keep my binary fields no smaller than 4 px to keep things simple, and to keep the codes in the same place throughout the rushes used for a given project.

2. PLAY with your favourite software/hardware combination to display visuals. Let's say you connect your laptop and a DVD player with a base audiovisual loop through a mixer. You rock the T-bar like there's no tomorrow, and you record the output.

3. EDIT the recording in your favourite editing software. We don't care if it's low-res; we don't care if it was recorded with a camera off a video wall. The traditional way, you would be stuck with a composite-video-quality recording of the output at best.

4. AUTOMATICALLY RETRIEVE THE TIMECODE from the recording, using the binary number encoded in each frame, with custom software that reads the video files and exports an AE time-remap sequence (a sketch of this step follows the list).

5. ONLINE the frames that appeared in the mix, replacing them with your SuperHiDef stereoscopic HDR material, leaving the spontaneity of your performance intact.
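
To make step 4 concrete, here is a minimal Python sketch of the retrieval stage. This is not the actual TC24 toolchain, just one way the read could work: it assumes OpenCV, a 2x12 code block at a known fixed position (X0, Y0 and CELL are placeholders you would measure from your own proxy), and the down-and-left bit order described in the TC16 comparison further below. It writes a plain CSV of recorded-frame to source-frame pairs; turning that into an actual AE time-remap is a separate conversion step.

    # tc24_read.py -- hypothetical sketch of step 4 (not the real TC24 tool).
    # Reads a recording, samples the burned-in binary fields in each frame
    # and writes recorded_frame -> source_frame pairs as CSV.
    import csv
    import sys

    import cv2  # pip install opencv-python

    BITS = 24
    COLS = BITS // 2        # two rows of twelve fields
    X0, Y0 = 16, 16         # top-left corner of the code block (assumed)
    CELL = 8                # field size in pixels (the text suggests >= 4 px)

    def read_code(gray):
        """Decode one frame: average the centre of each field, threshold to a bit."""
        value = 0
        for i in range(BITS):
            col = COLS - 1 - i // 2     # LSB in the rightmost column...
            row = i % 2                 # ...top cell first, then the one below
            cy = Y0 + row * CELL + CELL // 2
            cx = X0 + col * CELL + CELL // 2
            patch = gray[cy - 1:cy + 2, cx - 1:cx + 2]  # 3x3 patch, noise-tolerant
            if patch.mean() > 127:      # bright field == 1
                value |= 1 << i
        return value

    def main(video_path, csv_path):
        cap = cv2.VideoCapture(video_path)
        with open(csv_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["recorded_frame", "source_frame"])
            n = 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                writer.writerow([n, read_code(gray)])
                n += 1
        cap.release()

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])

The patch averaging and generous threshold are what would keep the code readable after a few generations and medium changes, as long as the fields stay a few pixels wide in the recording.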


FEATURES



SIMPLE AUTOMATIC READ OF TIMECODE
TIMECODE CAN BE READ AFTER MULTIPLE GENERATIONS AND MEDIUM CHANGES
GEEK-FRIENDLY, HUMAN-READABLE NATURAL BINARY CODE (Gray code somewhere on the roadmap)



MORE



To learn more about timecode, click the link next to these words. Yes, you can see it.

IMPROVEMENTS OVER TC16

TC24 is the second, revised implementation of the binary-represented timecode concept. It allows the user to address 2^24 frames, which equals roughly 186 hours of footage at 25 fps or 155 hours at 30 fps. The field order has been amended: the code now forms two rows, with the LSB in the top-right corner, proceeding down and left until reaching the bottom-left corner. This way the code usually forms a horizontal rectangle rather than two long lines. Additional bits can also be added ad hoc.
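
As a sketch of that layout (hypothetical again: only the bit order follows the description above, the field size and placement are placeholder assumptions), burning the code onto a frame is just a loop over 24 bits:

    # tc24_burn.py -- hypothetical sketch of the two-row layout described above:
    # bit 0 (LSB) top right, then down and left, ending in the bottom-left corner.
    import numpy as np

    BITS, COLS, CELL = 24, 12, 8   # two rows of twelve fields, 8 px each (assumed)

    def burn_code(frame: np.ndarray, number: int, x0: int = 16, y0: int = 16):
        """Draw the 24-bit code for `number` onto `frame` (an H x W x 3 uint8 array)."""
        for i in range(BITS):
            col = COLS - 1 - i // 2          # walk right-to-left, column by column
            row = i % 2                      # top cell, then the one below
            x, y = x0 + col * CELL, y0 + row * CELL
            frame[y:y + CELL, x:x + CELL] = 255 if (number >> i) & 1 else 0
        return frame

For example, burn_code(frame, 5) lights the top-right field (bit 0), leaves the one below it dark (bit 1), and lights the top field of the next column to the left (bit 2): natural binary, readable by eye, exactly as the feature list promises.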
Is there enough addressing space?
While not enough for a whole video library or even a medium-sized clip collection, it should be enough for many remix or creative projects, and if you need more, you can use a second code to carry a roll number, and then another and another. You could go all the way to TC64, which lets you address about 23,397,696,694 years of footage. Of course, in the future long hash keys will be used to access content, and the address space of TC24 is tiny compared to, say, YouTube's, but its purpose is different. It's meant as an easily hackable solution for creating and cross-referencing frames within works, rather than a comprehensive global timecode-matching solution.
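For the sceptical, the figures above check out; a quick sanity check in Python (values rounded in the prose):

    frames = 2 ** 24
    print(frames / 25 / 3600)                 # ~186.4 hours at 25 fps
    print(frames / 30 / 3600)                 # ~155.3 hours at 30 fps
    print(2 ** 64 / 25 / 3600 / 24 / 365)     # ~23,397,696,694 years with TC64
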
What other shapes are possible?
PARTICIPATE