Look what’s breaking new ground in visual effects. Artificial intelligence has already disrupted VFX, and machine-learning algorithms are performing on par with human editors. Last year saw the first AI-edited film trailer: 20th Century Fox’s sci-fi thriller Morgan, cut with the help of IBM Watson.
The modus operandi: researchers at IBM reportedly a) fed 100+ horror movie trailers into the supercomputer, b) Watson then processed the full movie in just 90 minutes to identify the right scenes to use for the trailer, and c) Watson culled 10 scenes from the 90-minute movie, which a human editor stitched into the final narrative. The result: a super-creepy trailer and a drastically shortened editing cycle, pared down from the usual 10–30 days to a single day. Oscars and artificial intelligence, it seems, have become inseparable.
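The final step of the pipeline described above can be sketched in a few lines. This is a toy illustration, not IBM's method: the per-scene "suspense" scores are hypothetical stand-ins for whatever Watson's proprietary audio-visual models produced, and the selection simply keeps the top-scoring scenes in chronological order for a human editor to stitch together.

```python
def pick_trailer_scenes(scene_scores, k=10):
    """scene_scores: list of (scene_index, score) pairs.
    Returns the k highest-scoring scene indices, restored to
    chronological order so the editor can cut them in sequence."""
    top = sorted(scene_scores, key=lambda s: s[1], reverse=True)[:k]
    return sorted(idx for idx, _ in top)

# Hypothetical scores for five scenes of a film
scores = [(0, 0.2), (1, 0.9), (2, 0.5), (3, 0.95), (4, 0.1)]
print(pick_trailer_scenes(scores, k=3))  # -> [1, 2, 3]
```

The interesting work, of course, is in producing the scores; the point here is only that the machine proposes a shortlist and the human does the storytelling.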
While Morgan didn’t pick up any awards, Rogue One: A Star Wars Story is definitely in the Oscars race, snagging two nominations: sound mixing and visual effects. What’s more, India-born, San Francisco-based Kiran Bhat, who alongside his colleagues Michael Koperwas, Brian Cantwell and Paige Warner pioneered ILM’s revolutionary facial capture system, has received the Technical Achievement Award. He is among the 34 recipients recognized by the Academy of Motion Picture Arts and Sciences for scientific and technical achievements. Bhat and his team will be presented the award at the annual Scientific and Technical Awards Presentation on February 11 in Beverly Hills.
Kiran Bhat’s facial performance capture wins technical award for Rogue One
ILM’s facial capture system was also used to create the Hulk in Marvel’s The Avengers, the turtles in TMNT and the orcs in the Warcraft movie. Bhat, who holds a PhD in Robotics from Carnegie Mellon University and reportedly left ILM a year ago, pioneered a system that transfers facial performances from actors to digital characters while still allowing full artistic control. In plain speak, Bhat explained, the facial performance-capture technology analyses an actor’s face and drives a 3D digital character that mimics the actor’s real facial performance. Case in point: Mark Ruffalo’s Hulk in The Avengers was developed with this technology, with Hulk’s facial motions created by observing Ruffalo’s performance.
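One common way to frame performance transfer of this kind is with blendshapes: fit expression weights that explain the actor’s captured face, then apply the same weights to the digital character’s own shape basis. The sketch below is a toy illustration of that idea only — ILM’s actual system is far more sophisticated and is not described in the source — and all the arrays are hypothetical.

```python
import numpy as np

def transfer_performance(actor_neutral, actor_shapes, captured_frame,
                         char_neutral, char_shapes):
    """actor_shapes / char_shapes: (num_blendshapes, num_coords) arrays
    of per-shape offsets from the neutral face. Returns the retargeted
    character frame and the recovered expression weights."""
    # Least-squares fit: which mix of blendshapes best explains the
    # actor's captured offsets from neutral?
    offsets = captured_frame - actor_neutral
    weights, *_ = np.linalg.lstsq(actor_shapes.T, offsets, rcond=None)
    # Replay the same expression weights on the character's basis.
    return char_neutral + weights @ char_shapes, weights
```

Because the weights, not the geometry, are transferred, the digital character can have a completely different face (a Hulk, an orc) and still mimic the actor’s expression — and an artist can edit the weights afterwards, which is one way to read the “full artistic control” claim.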
Bhat, who has since launched his own startup, Loom.ai, is all set to revolutionize VR by automatically converting selfies into expressive 3D avatars. These expressive faces can then power applications in messaging, gaming and e-commerce.
Twilight star Kristen Stewart authors a Machine Learning paper
And if you thought AI could only be understood by geeks and nerds, you’re wrong. Hollywood actor Kristen Stewart, best known as the mopey Bella from the Twilight series, has reportedly co-authored a paper on artificial intelligence. Originally published on Cornell University Library’s open-access site, the paper is titled “Bringing Impressionism to Life with Neural Style Transfer in Come Swim”.
Submitted on January 18 and co-authored by American poet and literary critic David Shapiro and Adobe research engineer Bhautik Joshi, the paper highlights the style-transfer techniques used in Stewart’s short film Come Swim. Her detractors dismissed it as another “high-level case study”, noting that style-transfer techniques have been a part of showbiz for a long time.
Come Swim, which is also Stewart’s directorial debut, tells the story of a day in the life of a heartbroken man through both impressionism and realism, and uses the “style transfer” technique to build its story. According to the research paper, “The film itself is grounded in a painting of man waking up from sleep. The paper has taken an artistic step by applying Neural Style Transfer to redraw key scenes in the movie in the style of the painting”. The paper attempts to “outline a novel technique using convolutional neural networks to re-draw a content image in the broad artistic style of a single style image”.
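The neural style transfer the paper builds on (the line of work started by Gatys et al.) boils down to two losses: a content loss that keeps the redrawn image close to the original, and a style loss that matches channel-to-channel correlations (Gram matrices) of CNN feature maps against the style painting. A minimal sketch of those two losses, using random arrays in place of the feature maps a pretrained CNN such as VGG would produce:

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) CNN feature map.
    Returns normalized channel-to-channel correlations, which
    capture texture/style while discarding spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_features, style_features):
    # Match the generated image's feature correlations to the painting's.
    return np.sum((gram_matrix(gen_features) - gram_matrix(style_features)) ** 2)

def content_loss(gen_features, content_features):
    # Keep the generated image's features close to the source frame's.
    return 0.5 * np.sum((gen_features - content_features) ** 2)
```

A full pipeline would extract these features with a pretrained network and optimize the generated frame to minimize a weighted sum of the two losses; this fragment only shows the objective being minimized.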
AI predicts 2017 Oscars – Best Picture winner
Artificial intelligence is at work at the Oscars again. Cambridge-headquartered Luminoso, a text-analysis and AI company that evolved from the MIT Media Lab, has taken a shot at this year’s Oscars prediction. Whether it holds true remains to be seen. According to Luminoso’s blog, here’s how the AI firm went about the forecasting business: “The team culled 84,000+ reviews from IMDB over a period of four years, from 2013 to 2016, taking the top 50 most popular movies from each year. The team then applied Luminoso Analytics to analyze the text in those reviews and identify correlations between topics and the final Oscar nominees and winners,” reads the blog.
Terms frequently used in reviews, such as “stunning”, “cinematography”, “masterpiece”, “plot” and “visual”, turned out to correlate with Oscar-worthy movies. The result: an algorithm that cranks out the Best Picture winner and contenders based on reviews alone.
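A toy stand-in for this approach — Luminoso’s actual analytics product is proprietary, and the term list and review snippets below are invented for illustration — scores each film by how often Oscar-correlated terms appear in its reviews, then ranks the films:

```python
# Terms the article reports as correlating with Oscar success
OSCAR_TERMS = {"stunning", "cinematography", "masterpiece", "plot", "visual"}

def oscar_score(reviews):
    """Count occurrences of Oscar-correlated terms across a film's reviews."""
    words = " ".join(reviews).lower().split()
    return sum(w.strip(".,!?\"'") in OSCAR_TERMS for w in words)

def rank_films(film_reviews):
    """film_reviews: {title: [review, ...]} -> titles, best-scoring first."""
    return sorted(film_reviews, key=lambda f: oscar_score(film_reviews[f]),
                  reverse=True)

reviews = {
    "Film A": ["A stunning masterpiece.", "The cinematography is stunning."],
    "Film B": ["A fun ride.", "Great popcorn movie."],
}
print(rank_films(reviews))  # -> ['Film A', 'Film B']
```

Luminoso’s real system learned which topics correlated with past nominees rather than using a hand-picked word list, but the ranking-by-review-signal idea is the same.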
Here are Luminoso’s 2017 Oscars picks. The algorithm conferred the Best Picture title on Jackie, starring Natalie Portman, with the following contenders:
- La La Land
- Hell or High Water
- Hacksaw Ridge
- Jungle Book
- Nocturnal Animals
- Manchester by the Sea
Let’s see how the prediction holds up. If it doesn’t, Luminoso had better gear up for a media blitzkrieg.
Richa Bhatia is a seasoned journalist with six years’ experience in reportage and news coverage, with stints at the Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old, and loves writing about the next-gen technology shaping our world.