Saturday, December 3, 2011

Digital face transplant for low-budget movies

Big-budget special effects could soon be within the grasp of low-budget film-makers thanks to a new technique for automatically replacing one actor's face with another's.

Face replacement is a much-used effect in Hollywood productions, even cropping up in realistic drama films such as The Social Network, in which two unrelated actors played a pair of twins. The complex and expensive equipment it requires, not to mention dedicated visual effects artists, has kept it out of low-budget movies, though.

No longer. "We achieve high-quality results with just a single camera and simple lighting set-up," says Kevin Dale, a computer scientist at Harvard University who came up with the new technique.

Dale and colleagues start a face "transplant" with an algorithm that creates 3D models of each face. Their system then automatically morphs the image of the donor's face to match the recipient, but that alone doesn't create a realistic-looking video: a joining seam is visible. "One frame might look good, or many frames in sequence might look good individually, but when you play them together you get flickering," explains Dale.
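The published system fits detailed 3D models to both faces, but the basic alignment step can be illustrated with a much simpler 2D sketch: estimate a similarity transform from the donor's facial landmarks to the recipient's, then warp the donor frame into place. The frame and landmark file names below are hypothetical stand-ins, not part of Dale's system.

    # Illustrative 2D stand-in only: the published system fits 3D face models;
    # this sketch just warps the donor face onto the recipient frame using
    # matched facial landmarks (file names below are hypothetical).
    import cv2
    import numpy as np

    def warp_donor_to_recipient(donor_img, donor_pts, recipient_pts, out_shape):
        """Estimate a similarity transform from donor landmarks to recipient
        landmarks and warp the donor frame into the recipient's coordinates."""
        src = np.asarray(donor_pts, dtype=np.float32)
        dst = np.asarray(recipient_pts, dtype=np.float32)
        # Partial affine = rotation + uniform scale + translation, robust to outliers.
        M, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        h, w = out_shape[:2]
        return cv2.warpAffine(donor_img, M, (w, h), flags=cv2.INTER_LINEAR)

    # Example with hypothetical data: matching landmark sets for each actor.
    donor = cv2.imread("donor_frame.png")                    # assumed input frames
    recipient = cv2.imread("recipient_frame.png")
    donor_landmarks = np.load("donor_landmarks.npy")         # shape (68, 2)
    recipient_landmarks = np.load("recipient_landmarks.npy")
    aligned_donor = warp_donor_to_recipient(donor, donor_landmarks,
                                            recipient_landmarks, recipient.shape)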

So his system calculates the position on both actors' faces where the seam will be as unobtrusive as possible. It also ensures that the seam doesn't jump around from frame to frame, which avoids flickering. The whole process takes about 20 minutes to produce a 10-second video on an ordinary desktop computer and requires only a little manual interaction. Users can place facial markers on the first frame to generate the 3D facial model, but in some cases the system "worked right out of the box", says Dale.
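The article doesn't spell out the optimisation, but the trade-off it describes, a seam that is unobtrusive in each frame and stable across frames, can be sketched with a toy dynamic program: pick one of K candidate seam placements per frame so that the total mismatch, plus a penalty for jumping between frames, is minimised. The mismatch costs here are synthetic, and the real system optimises a full spatio-temporal seam rather than a single index per frame.

    # Toy illustration of the trade-off described above: per frame, pick one of
    # K candidate seam placements so that mismatch along the seam is low AND the
    # choice doesn't jump between frames. Costs are synthetic; the real system
    # optimises a full spatio-temporal seam, not a single index per frame.
    import numpy as np

    def stable_seam_path(mismatch, jump_penalty=1.0):
        """mismatch: (T, K) array, mismatch[t, k] = cost of candidate k in frame t.
        Returns the per-frame choice minimising
        sum_t mismatch[t, k_t] + jump_penalty * |k_t - k_{t-1}|."""
        T, K = mismatch.shape
        cost = mismatch[0].copy()              # best total cost ending in each candidate
        back = np.zeros((T, K), dtype=int)     # backpointers for the optimal path
        idx = np.arange(K)
        for t in range(1, T):
            # transition[j, k] = cost of candidate j at t-1 followed by k at t
            transition = cost[:, None] + jump_penalty * np.abs(idx[:, None] - idx[None, :])
            back[t] = np.argmin(transition, axis=0)
            cost = transition[back[t], idx] + mismatch[t]
        # Backtrack the optimal sequence of seam choices.
        path = np.empty(T, dtype=int)
        path[-1] = int(np.argmin(cost))
        for t in range(T - 1, 0, -1):
            path[t - 1] = back[t, path[t]]
        return path

    rng = np.random.default_rng(0)
    costs = rng.random((300, 12))              # 300 frames, 12 candidate seams
    print(stable_seam_path(costs, jump_penalty=0.5)[:10])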

Hybrid performances

In addition to performing face transplants, Dale says directors could also use his system to blend multiple versions of an actor's performance into a single scene. It can combine the mouth from one take with the eyes from another, for example, because it can match slight differences in movement between two videos. It can't handle every situation, however, and videos with complex or very different lighting won't match up well.
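Once two takes are aligned to one another, which is the part the system automates, compositing the mouth from one take with the eyes from another reduces to blending the aligned frames through a feathered mask. The sketch below assumes that alignment has already been done; the file names and mouth rectangle are hypothetical.

    # Minimal compositing sketch, assuming two takes of the same actor have
    # already been aligned to each other (the alignment is the hard part that
    # the system handles). File names and the mask rectangle are hypothetical.
    import cv2
    import numpy as np

    def blend_regions(take_eyes, take_mouth, mouth_mask, feather=31):
        """Take the mouth region from one take and everything else from the
        other, feathering the mask so the boundary is not visible."""
        soft = cv2.GaussianBlur(mouth_mask.astype(np.float32), (feather, feather), 0)
        soft = soft[..., None]                 # broadcast over colour channels
        out = soft * take_mouth.astype(np.float32) + (1.0 - soft) * take_eyes.astype(np.float32)
        return out.astype(np.uint8)

    take_a = cv2.imread("take_a_frame.png")    # take with the preferred eyes
    take_b = cv2.imread("take_b_frame.png")    # take with the preferred mouth
    mask = np.zeros(take_a.shape[:2], dtype=np.float32)
    mask[300:420, 250:450] = 1.0               # hypothetical mouth rectangle
    composite = blend_regions(take_a, take_b, mask, feather=51)
    cv2.imwrite("composite.png", composite)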

"It's a step towards making this more automatic," says Paul Debevec, a computer graphics researcher at the University of Southern California in Los Angeles, whose work has been used in films such as The Matrix and Avatar. He says that Dale's technique is unlikely to be used by film industry professionals, who can already achieve the same effect, but it could be made into a YouTube plug-in or a similar easy-to-use tool. He warns that people may struggle to match lighting between videos made at home, though.

Journal reference: ACM Transactions on Graphics, DOI: 10.1145/2070781.2024164


Source: http://feeds.newscientist.com/c/749/f/10897/s/1a9c14b4/l/0L0Snewscientist0N0Carticle0Cdn212380Edigital0Eface0Etransplant0Efor0Elowbudget0Emovies0Bhtml0DDCMP0FOTC0Erss0Gnsref0Fonline0Enews/story01.htm
