Computers can now imagine narration and dialogue based on video

From a video segment, the program generates a sentence of narration describing what happens on screen.

A Korean research team announced on Feb. 3 that it has developed an imaginative computer program that can extract information from videos and compose narration or dialogue suited to each scene shown on the screen. The team, led by Jang Byung-tak, a professor in the Department of Computer Science and Engineering at Seoul National University, fed the 1,232-minute Korean animation Pororo into the program. It found that the program taught itself to recognize scenes, lines, stories, and characters using an associative memory that resembles the neural network of a human brain.

Given a specific scene, the program can generate dialogue appropriate to each character, and the generated lines may differ from the original ones. The output also varies with the amount of training material, for example 100 minutes versus 10,000 minutes of the cartoon. The researchers attribute this to the possibility that the characters' personalities change over the course of the series.
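The article does not describe the team's actual model, but the general idea of an associative memory linking scene features to dialogue can be sketched as a toy nearest-neighbour lookup. Everything here, including the feature vectors and sample lines, is a hypothetical illustration, not the researchers' method.

```python
# Toy sketch of an associative memory: store (scene-feature, dialogue)
# pairs, then recall the dialogue of the most similar stored scene.
# The real system learns from video; this only illustrates the lookup idea.

class SceneDialogueMemory:
    """Associates scene feature vectors with dialogue lines."""

    def __init__(self):
        self.memory = []  # list of (feature_vector, dialogue) pairs

    def store(self, features, dialogue):
        self.memory.append((list(features), dialogue))

    def recall(self, features):
        # Return dialogue of the nearest stored scene
        # (squared Euclidean distance).
        def dist(pair):
            stored, _ = pair
            return sum((a - b) ** 2 for a, b in zip(stored, features))
        _, dialogue = min(self.memory, key=dist)
        return dialogue

# Hypothetical scene features (e.g. which character appears, action type).
mem = SceneDialogueMemory()
mem.store([1.0, 0.0], "Pororo: Let's go play outside!")
mem.store([0.0, 1.0], "Crong: Crong! Crong!")

# A new scene close to the first stored one recalls its dialogue.
print(mem.recall([0.9, 0.1]))  # → Pororo: Let's go play outside!
```

With more stored scenes (100 versus 10,000 minutes of footage), the nearest match for the same query can change, which loosely mirrors why the program's output differs with the amount of cartoon entered.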

For full article, see Business Korea.
