For better or worse, Artificial Intelligence (AI) is now a reality, and it already has a long history in cinema.
From classic assembly-line machinery to supercomputers, remarkably human-like operating systems, and robots, the developments of this century have changed our lives immeasurably and, judging by the pace of these developments, we can safely say that we have only seen the beginning.
Generally, AI in the movies plays the faithful robot that obeys the hero (the Star Wars droids, for example), the villain that wants to destroy humanity, as in Transcendence (2014), or both: the technology created to serve turns against its masters, as in The Terminator (1984), The Matrix (1999), I, Robot (2004), and so on.
But AI doesn’t just seek to destroy the world in fiction. Hand in hand with automation and robotics, AI can help human beings in the creation and editing of films and videos.
AI and Video Editing
To start with something basic, the first subtle signs of this can be seen on the iPhone in the Photos app, through the “Memories” feature. Memories can generate a polished video automatically. Once a memory is selected, the user can make the video shorter or longer and give it an “emotional direction” (pleasant, uplifting, epic, or sentimental) that influences the tone and editing.
You can see more advanced examples of AI video editing at Wibbitz, a platform that can automatically generate videos from the text it is fed in a matter of seconds.
Wibbitz technology is based on algorithms that analyze the text of an article, extract the most interesting information, and convert it into a video using NLP (Natural Language Processing) techniques. These videos show standout phrases, images, infographics, and the most attention-grabbing elements of the analyzed text.
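Wibbitz has not published how its pipeline works, but the extraction step described above resembles classic extractive summarization. Below is a minimal, self-contained Python sketch of that idea, scoring sentences by average word frequency; the sample article, threshold, and function name are invented purely for illustration.

```python
import re
from collections import Counter

def extract_key_sentences(article_text, top_n=3):
    """Score sentences by average word frequency and keep the top_n,
    in their original order, as a crude stand-in for the extraction step."""
    sentences = re.split(r'(?<=[.!?])\s+', article_text.strip())
    freq = Counter(re.findall(r'[a-z]+', article_text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = set(sorted(sentences, key=score, reverse=True)[:top_n])
    return [s for s in sentences if s in ranked]

article = (
    "A new AI system turns written articles into short videos. "
    "The system scans the text and pulls out the most informative sentences. "
    "Publishers use the resulting clips to reach viewers who skip long reads."
)
for caption in extract_key_sentences(article, top_n=2):
    print(caption)  # these sentences would become on-screen captions in the video
```

A real system would add image and infographic selection on top of this, but the core idea of ranking and keeping the most informative sentences is the same.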
Breaking-news websites such as CNN and Mashable use this service to create video content that supplements their written stories for users who would rather watch a story than read the article.
Magisto is another website that automatically creates edited videos from user-uploaded content. It asks the user to choose an emotional direction and then lets them make changes by customizing timing, transitions, and effects.
AI as Director and Film Editor
In 2016, IBM’s Watson supercomputer helped a video editor produce a trailer for the Hollywood thriller Morgan. IBM scientists started the process by feeding more than 100 horror movie trailers into Watson so it could analyze patterns in their sound and visual components, which enabled Watson to determine what characteristics make for a compelling trailer.
Watson then produced a list of ten scenes from the movie, totaling six minutes, that it determined were the best candidates for the trailer. A human editor took those six minutes of footage and assembled the selected shots into a coherent story. Creating a trailer for a movie usually takes between ten and thirty days; with IBM Watson, the editing time was reduced to about twenty-four hours.
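IBM has not published Watson’s model, but the workflow described above amounts to scoring candidate scenes against patterns learned from existing trailers and keeping the best ones that fit a time budget. Here is a toy Python sketch of that selection step; the feature names, weights, and scenes are invented for illustration and are not Watson’s actual features.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    name: str
    duration_s: float
    features: dict = field(default_factory=dict)  # e.g. {"tension": 0.8, "loud_score": 0.9}

# Hypothetical weights standing in for patterns learned from existing trailers.
LEARNED_WEIGHTS = {"tension": 0.5, "face_closeup": 0.2, "loud_score": 0.3}

def scene_score(scene):
    """Weighted sum of a scene's features under the learned weights."""
    return sum(LEARNED_WEIGHTS.get(k, 0.0) * v for k, v in scene.features.items())

def pick_scenes(scenes, budget_s=360.0):
    """Greedily keep the highest-scoring scenes that fit the six-minute budget."""
    chosen, used = [], 0.0
    for s in sorted(scenes, key=scene_score, reverse=True):
        if used + s.duration_s <= budget_s:
            chosen.append(s)
            used += s.duration_s
    return chosen

candidates = [
    Scene("lab escape", 42.0, {"tension": 0.9, "loud_score": 0.8}),
    Scene("quiet interview", 55.0, {"face_closeup": 0.9, "tension": 0.3}),
    Scene("corridor chase", 38.0, {"tension": 0.8, "loud_score": 0.9}),
]
for s in pick_scenes(candidates):
    print(f"{s.name}: {s.duration_s:.0f}s, score {scene_score(s):.2f}")
```

The human editor’s job then corresponds to everything this sketch leaves out: ordering the chosen shots and cutting them into a story.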
A film called Impossible Things, whose script was written by both AI and humans, took things a step further. The AI agent evaluated the data to determine which plot twists, premises, and storylines would resonate best with viewers’ demands. Intriguingly, the AI agent determined that a scene involving a bathtub and a piano was specifically necessary to make the film resonate with its target audience.
Sunspring, a completely different film, debuted at roughly the same time with the unique credential of having an AI agent named Benjamin (also known as the “automatic screenwriter”) write the whole script without human intervention. To accomplish this, Benjamin was fed dozens of science fiction movie and television scripts, including Futurama, Star Trek, Stargate SG-1, The Fifth Element, and Ghostbusters. The short film was then made by human actors, filmmakers, and editors who followed the AI’s script. The result, as expected, was an uncomfortable and barely intelligible film that shows the current limits of AI in creative writing.
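Benjamin was reportedly a recurrent neural network, which is far more capable than what fits in a few lines here, but the basic “feed it scripts, get script-like text back” loop can be illustrated with a toy word-level Markov chain. The miniature corpus and the generated line below are entirely made up and have nothing to do with Benjamin’s training data.

```python
import random
from collections import defaultdict

# A tiny stand-in corpus of "script" lines; a real model would ingest full screenplays.
corpus = [
    "he looks at the stars and says nothing",
    "she says nothing and looks at him",
    "the ship says he must go to the stars",
]

# Learn which word tends to follow which.
model = defaultdict(list)
for line in corpus:
    words = line.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)

# Sample a new line by repeatedly picking a plausible next word.
random.seed(7)
word, output = "he", ["he"]
for _ in range(12):
    choices = model.get(word)
    if not choices:
        break
    word = random.choice(choices)
    output.append(word)
print(" ".join(output))  # prints a grammatical-ish but meaning-free line, much like Sunspring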
Have you heard of KIRA? KIRA is a robot arm developed by Motorized Precision that is rapidly changing the game for smooth, precise, and very complex kinematic camera movements.
Over the past three decades, computer-generated imagery has transformed the way many movies and television shows are made. However, creating digital effects is still a complex and very tedious process. For every second of a movie, an army of designers can spend hours isolating people and objects in raw footage, digitally building new images from scratch, and combining them so the edits go unnoticed. Arraiy develops systems that can execute some tasks in those processes. The company’s founders, Gary Bradski and Ethan Rublee, also created Industrial Perception, one of several emerging robotics companies that Google bought several years ago.
Backed by more than $10 million in financing from backers such as the Silicon Valley venture capital firm Lux Capital and SoftBank Ventures, Arraiy is part of a broad effort across industry and academia to build systems that can generate and manipulate images autonomously. Thanks to improvements in neural networks (complex algorithms that can learn tasks by analyzing vast amounts of data), these systems can clean up noise and errors in footage, apply simple effects, create very realistic images of fictional characters, or help put one person’s head on another person’s body.
Adobe, which makes many of the software tools designers currently use, is also exploring machine learning that can automate similar tasks. At Industrial Perception, Rublee helped develop computer vision for robots designed to perform tasks such as loading and unloading cargo trucks. Shortly after Google acquired that company, work on neural networks intensified: in barely two weeks, a team of Google researchers trained a neural network that outperformed the company’s existing technology. Rublee and Bradski have since collected a decade’s worth of rotoscoping material and other visual effects work from several design studios (they did not want to specify which ones).
They have also added their own work to the collection. After filming people, mannequins, and other objects in front of a green screen, for example, the company’s engineers can rotoscope thousands of images relatively quickly and add them to the dataset. Once the algorithm is trained, it can rotoscope images without the help of a green screen. The technology still has flaws, and in some cases designers must still adjust the automated work, but it is improving.
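Arraiy has not detailed its pipeline, but the green-screen step described here corresponds to generating segmentation masks cheaply via chroma keying so that (frame, mask) pairs can serve as training labels. Below is a minimal NumPy sketch of that labeling step; the threshold and the tiny synthetic frame are invented for illustration and are not the company’s actual method.

```python
import numpy as np

def chroma_key_mask(frame_rgb, green_dominance=1.3):
    """Return a boolean foreground mask: True where a pixel is NOT background green."""
    r = frame_rgb[..., 0].astype(np.float32)
    g = frame_rgb[..., 1].astype(np.float32)
    b = frame_rgb[..., 2].astype(np.float32)
    background = (g > green_dominance * r) & (g > green_dominance * b)
    return ~background

# Tiny fake frame: the left half is "green screen", the right half is a gray subject.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :2] = [30, 200, 40]    # bright green background pixels
frame[:, 2:] = [120, 110, 115]  # foreground subject pixels

mask = chroma_key_mask(frame)
print(mask.astype(int))  # 0 = background, 1 = foreground; (frame, mask) pairs feed the model
```

Once enough of these cheaply labeled pairs exist, a segmentation network trained on them can predict similar masks for footage shot without any green screen, which is the capability the paragraph above describes.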
P.S. Before finishing, we at Zazz can only wonder what the next 100 years of AI in the movies hold for us, but we are sure of one thing: it will be very entertaining.