Reality Computing is back! Immerse yourself in this growing ecosystem of converging 3D technologies
REAL is a unique opportunity for companies, individuals and other entities to understand that there is no way to navigate the convergence of 3D technologies alone.
Competition does not exist here. 3D tech is converging, and REAL offers a temporary autonomous zone for all to understand that in the near future, capture, compute and create technologies will merge ever closer together as they bounce off the internet and are amplified by absurd levels of computing power.
Your specialization is valuable, but if you do not keep an eye on this emerging convergence, you will soon be lost. This is an industry-independent event for a reason.
REAL is hundreds of speakers, attendees and exhibitors who understand that the concept of Reality Computing, though difficult to understand, is an abstraction worth exploring and concretizing together.
These are thought leaders from various industries in our growing 3D technology ecosystem who will present projects and ideas that encapsulate the extraordinary and pioneering work that is setting the standard for the immediate present as well as the near and far future.
The world of 3D technology is changing rapidly. There are new projects springing up every day. With the advent of crowdfunding, talented startups are moving fast and disrupting traditional processes by the dozens every single day.
There is one way forward, to throw off the traditional business models that have influenced global industry and come to an understanding: Collaboration is not only key, it is essential. Experience, explore and learn without compromising integrity in the temporary autonomous zone that is REAL.
Congratulations to the 4 qualifying startups! To see who made it through the first round, click below. Thanks to everyone who submitted!
Thought leaders from across the globe showcase projects and projections which illustrate a new world in the making. Artists, architects, engineers, scientists, explorers and cultural visionaries examine the achievements and obstacles that are both enabling and preventing the convergence of 3D technologies and their applications. Creativity, passion and inspiration make up the universal language of these phenomenal and inviting talks.
Start preparing for your REAL 2016 experience NOW. Across two Pavilions over two days, there's much to choose from.
A selection of confirmed REAL 2016 speakers with many more to follow.
Adam Ellison is an Australian entrepreneur, engineer and mechanism designer. In his 12 years of experience, Adam has worked at both large corporations as well as smaller consultancies in mechanical design and engineering.
Adam studied Mechanical Engineering at Curtin University in Perth, Australia, where he first met Daniel, now his cofounder at Modbot. Specialising in Automotive Engineering, Adam moved to Melbourne to work at General Motors, where he designed parts of the Chevy Caprice police vehicle, the Camaro (of Transformers’ Bumblebee fame), and some local Holden models. Later he designed automation solutions for high-volume food and beverage clients such as Coke and Pepsi. Eventually frustrated by the limited scope of his design role, Adam returned to university, graduating with a bachelor’s degree in Commerce, majoring in Innovation and Entrepreneurship.
With his interest in entrepreneurship, Adam spent several years running Stokes Rousseau, a construction company, alongside an engineering consultancy and a digital media startup. This allowed him not only to utilize his technical background but also to dive deep into the mechanics of running a business. After building up a diverse range of business skills and extensive mechanical experience, he decided to return to his technical roots and launch a robotics company with Daniel Pizzata. In January 2014, Modbot was born.
Sydney Jay “Syd” Mead was born in St. Paul, Minnesota, on July 18, 1933, but spent only a few years there before moving to what would be the second of many homes throughout the western United States prior to graduating from high school in Colorado Springs, Colorado, in 1951. After serving a three-year enlistment in the U.S. Army, Syd Mead continued on to the Art Center School in Los Angeles, where he graduated with great distinction in June of 1959.
He was immediately recruited by the Ford Motor Company’s Advanced Styling Studio under the management of Elwood Engel. He left after two years to accept a variety of assignments travelling the world to illustrate books, catalogues and product designs for large corporate entities including United States Steel, Celanese, Allis-Chalmers and Philips Electronics.
In the 70’s and 80’s he was recruited by several firms to create interior and exterior architectural renderings for such clients as Intercontinental Hotels, 3D International, Harwood Taylor & Associates, Don Ghia, and Gresham & Smith. In 2010, the New York firm of Philip Koether Architects tapped Syd to design the interior of the popular restaurant Bar Basque in Manhattan.
In 1979, the motion picture industry recognized his unique ability to “visualize” the future. His work can be seen in such feature films as Star Trek: The Motion Picture, Blade Runner, TRON, 2010, Short Circuit, Aliens, Timecop, Johnny Mnemonic, MP-3 and, most recently, Tomorrowland, for which he did pre-production art for director Brad Bird.
Syd has also done concept and design work for a number of major Japanese clients, including Sony, Minolta, Dentsu, Dyflex, Tiger, Seibu, Mitsukoshi, Bandai, NHK and Honda, as well as contributing to two Japanese film projects. He also created designs for two Japanese toy icons, “The New Yamato” and all eight robot characters in the Turn-A Gundam mobile suit series, which are also seen as characters in television shows.
With transportation design as his first love, Syd seldom misses an opportunity to provide his unique blend of futurism and believability to projects involving a vehicle that travels from “A” to “B”. Whether it be designing solar-powered unicycles, show cars, luxury yachts, interiors of private 747s, or interplanetary vehicles, each receives the same attention to detail and is part of a perfectly designed scenario. This scenario approach has become a Syd Mead trademark and can be seen in everything from concept cars for Ford Motor Company to futuristic “Hypervans”. It has earned him the title of “Visual Futurist” among his peers and the industry.
Syd Mead continues an active schedule of one-man shows. Beginning in Germany in 1973, his work has since been exhibited in Japan, Italy, California, and Spain. In 1983 he was the guest speaker to the design staff at Chrysler Corporation. The resulting presentation was a resounding success and has since been expanded and enhanced with computer-generated imagery specifically assembled at the request of such clients as Disney, Carnegie Mellon University, Purdue, Pratt Institute and the Society of Illustrators. In March of 2010, Syd completed a four-city tour in Australia to capacity audiences at each venue.
His one-man shows, “Cavalcade to the Crimson Castle” and “Progressions”, have drawn record crowds in venues across the nation. Syd Mead has published 10 books and a set of instructional DVDs, “Techniques of Syd Mead”, which continue to be sought after by designers around the world.
Mead attributes his success in an astonishing range of creative activities to the premise that imagination, the idea, supersedes technique. “There are more people in the world who make things than there are people who think of things to make.” – Syd Mead
Josh is founder and CEO of Smartvid.io, Inc., an intelligent cloud platform for industrial video and photos. Prior to Smartvid.io, he was a founder or management team member of two fast-growth, venture-capital-backed software companies (Emptoris and Vela Systems) that revolutionized their respective markets before being acquired. At Emptoris, Josh was responsible for product management and strategy as the web-based strategic sourcing company grew from 20 to 200+ employees and from 1 to 100+ customers, including Motorola, GlaxoSmithKline, Bank of America, and American Express. In 2005, Kanner co-founded Vela Systems and led the company’s product, marketing and business development functions until Autodesk acquired them in 2012. Under Kanner’s leadership, Vela pioneered the use of web-based and mobile workflows to drive efficiency across construction and facilities operations. At the time of acquisition, Vela Systems software was in use on thousands of projects spanning the globe and in a variety of industry segments, from hospitals to stadiums to oil and gas facilities. Kanner graduated from Brown University with a BS with Honors in Chemistry and Biology, and earned an MBA from MIT’s Sloan School of Management. He still gets excited to put on a hard hat and walk a job.
Charles is leading BONE Structure’s expansion efforts in California and is on a constant quest for innovations that can be applied to the lagging construction industry.
Prior to joining the family business full time, he completed a Masters in Civil & Environmental Engineering from Stanford, and a BS in Structural Engineering from McGill University in Canada. During his graduate degree, Charles worked as a teaching and research assistant in Virtual Design and Construction and co-organized the Industrialized Construction initiative at Stanford to encourage knowledge exchange between students with some of the world’s most prominent companies and leading researchers of innovative construction methods.
About BONE Structure
BONE Structure® has developed and patented a technology that allows residential and light commercial buildings to be designed, manufactured and assembled in days with zero waste, efficiently and without compromising architectural creativity. BONE Structure buildings are healthy, energy efficient and adaptable. Since 2005, BONE Structure has partnered with architects and builders to complete over 200 custom projects across North America using its proprietary technology.
Behnaz Farahi is an architect and interaction designer exploring the potential of interactive environments and their relationship to the human body. In particular, she is interested in the implementation of emerging technologies in contemporary art/architecture practice. Her goal is to enhance the relationship between human beings and the built environment by implementing design/motion principles inspired by natural systems. Application areas include architecture, fashion and interaction design. She also specializes in additive manufacturing and robotic technologies.
Her work has been exhibited internationally in the USA, Canada and China, and has been featured in several magazines and websites including The Guardian, BBC, CNN, Motherboard, Dezeen, Frame Magazine, The Creators Project and many more. Behnaz Farahi has won several awards, including first prize from the Kinetic Art Organization in 2013, the Madworkshop Grant and the Rock Hudson Fellowship.
Behnaz Farahi has worked with leading firms such as Autodesk, Fuksas Studio, and 3D Systems/will.i.am. She has also collaborated with Professor Behrokh Khoshnevis on two NASA-funded research projects developing a robotic fabrication technology to 3D print structures on the Moon and Mars.
She is currently an Artist in Residence at Autodesk Pier 9. She is also an Annenberg Fellow and PhD candidate in Interdisciplinary Media Arts and Practice at the USC School of Cinematic Arts. She has an undergraduate degree and two master’s degrees in Architecture.
Julie Young is a Producer at Emblematic Group, a virtual reality company based in Los Angeles. She has worked on several room-scale VR experiences including Project Syria (Sundance 2015), Kiya (Sundance 2016), Across the Line (Sundance 2016), and Formula-1 (Singapore Grand Prix 2015). Prior to working at Emblematic Group, Julie was an Associate Producer on the 2012 Frontline documentary Fast Times at West Philly High. She has also worked at Goldman Sachs and The Motley Fool. Julie is a graduate of the University of Pennsylvania, where she studied economics at The Wharton School.
As VP of Developer Relations and Business Development at Body Labs, Ali is commercializing the world’s most advanced technology for analyzing human body shape, pose and motion. His mission is to provide a digital human body platform upon and around which goods and services can be designed, manufactured, bought and sold.
He is establishing Body Labs as a horizontal platform by partnering with developers to create amazing consumer experiences for retail, fitness, health and more, and with designers to leverage the richness of structured 3D data to inform product design, starting with apparel and shoes. Prior to joining Body Labs, he was at Intel, where he led business development for Intel RealSense technology, with 3D body perception as a horizontal strategy and as a vertical solution for virtual try-on. Ali is passionate about understanding user pain points and driving technology to delight users through uncommon partnerships.
After studying mechanical engineering, Tim Geurtjens (b. 1976) enrolled at the Design Academy Eindhoven in 2003, graduating four years later. Since then he has worked as CTO of Joris Laarman Lab, best known for its experimental designs inspired by emerging technology. Over the last 7 years he has developed extensive knowledge of exotic materials and cutting-edge technologies, specifically novel digital fabrication methods.
Tim is co-founder and CTO of MX3D, a company that has developed several large-scale 3D printers that use 6-axis industrial robots. These printers allow very large structures to be printed in many different materials, such as thermoset plastics, stainless steel or aluminium. MX3D has recently disclosed its ambitious plan to print a steel bridge over a canal in Amsterdam.
Kaitlyn Hova is a professional violinist, full-stack web developer, composer, and neuroscientist. Hova is co-founder of The Synesthesia Network and The Hovalin, a 3D-printable violin. The Hovalin provides maker music programs and educational systems with a low-cost, high-quality instrument, aimed at allowing STEM grants to also power music programs.
Shashi Jain is an IoT Innovation Manager at Intel Corp, where he uses design thinking to build interesting products around Intel Curie and Edison modules. Shashi is passionate about 3D printing and volunteering. In 2015, Shashi helped Intel volunteers print and assemble 100 3D-printed hand prosthetics for the e-NABLE community, which distributes them to anyone in need.
Mouse McCoy is a world-class racer, stunt man, and the founder and creative director of Bandito Brothers, an award-winning film studio and production outfit that has cornered the market on the art and technology of motion and mobility. In racing he is a Baja 500 and Baja 1000 champion. The critically acclaimed documentary Dust to Glory captures the struggles and triumphs of his epic solo journey on a motorcycle in the grueling Baja 1000. In the film world he is known for three things: a fiercely independent spirit, real-world innovation, and boldly challenging concepts. The enthusiastic response to his first feature, Act of Valor, and the opportunity to collaborate with Navy SEALs, inspired McCoy to keep refining the authentic approach to action filmmaking his company has been steadfastly developing for the last decade. He has since won a Gold Lion and a Clio Gold, and garnered 4 billion media impressions for the Hot Wheels: For Real campaign.
Brad is an HP Distinguished Technologist and the inventor and creator of Sprout by HP, the new dual-screen immersive computer that combines natural computing with 2D and 3D scanning, enabling unique Blended Reality experiences that bring out the creativity in everyone.
Brad currently holds the position of Chief Product Experience Architect in the Immersive Computing Group at HP and is responsible for developing future generation computer products, interactive experiences, human-computer interfaces, and the required technologies.
Brad is a 20-year HP veteran, inventor, and intrapreneur with an extensive product portfolio and an impressive track record of bringing products to market. During the first 15 years of his HP career, Brad worked in the HP Printing & Imaging R&D Lab in San Diego, architecting over 18 innovative AiO printer-scanner-copier products that hit the market.
Brad’s experience designing imaging systems and user interfaces inspired his development of a new product form factor to enhance the creative workflow, which led to his invention of the interactive projection scanning technology and experience built into Sprout. By integrating camera-based WYSIWYG scanning of both 2D and 3D objects, face-up, directly into a horizontal projected UI, numerous steps are eliminated when working with digital and physical content, and the creative process is streamlined into a seamless, productive workflow without the need to learn complex tools, enabling ideas to go from thought to expression in an instant.
Brad has filed over 45 patents (more than 20 granted and more than 25 pending). Brad earned a Master of Science degree in Mechanical Engineering and a Bachelor of Science degree in Applied & Engineering Physics from Cornell University, Ithaca, NY.
Daniel McKinnon is the Enterprise Product Manager at 3DR. He has worked closely with the Autodesk ReCap team to build Site Scan, an intuitive application for autonomous drone inspections, surveys, and scans that seamlessly integrates with the Autodesk software ecosystem. Site Scan enables Autodesk customers to generate aerial imagery, orthophotos, point clouds, or meshes for infrastructure planning, construction monitoring, and myriad other uses.
Daniel received his bachelor’s degree in Physics and doctorate in Chemical Engineering before co-founding Agribotix, an agricultural survey company, after being conscripted by the Denver Zoo to develop a drone for wildlife observation and capture. At Agribotix, he worked extensively with growers and agronomists to develop both a front-end data collection workflow and a back-end data processing solution that quickly and easily gets valuable actionable intelligence into their hands. These efforts have resulted in the establishment of several concrete value propositions for drones in agriculture as well as numerous lessons for what works and what doesn’t in the field.
Founder of several successful software technology companies, with hands-on leadership across technology, financials, team development, customer acquisition and go-to-market. A colleague of mine once said that his work ethic was to “have fun, do good, make money.” That’s good enough for me. A recognized leader in SaaS business models and technologies, software design techniques and portfolio management techniques. Other experience includes: whole product strategy; software architecture & design; software development life cycle (SDLC); customer life cycle; go-to-market; project and portfolio management; business planning; fundraising; board management; vendor management; customer management; professional services management; business and financial planning; business writing.
As the director of Long Now, Alexander founded The Interval and has facilitated projects such as the 10,000 Year Clock, The Rosetta Project, Long Bets, Seminars About Long-Term Thinking, Long Server and others. Alexander shares several design patents on the 10,000 Year Clock with Danny Hillis; the first prototype is in the Science Museum of London, and the monument-scale version is now under construction in West Texas. Alexander’s combat robots have won over six world championship titles, appearing in the hit ABC TV show BattleBots. He has also built large pyrotechnic displays for the Burning Man festival, robotic bartenders and other dangerous machines. At Carnegie Mellon University, Alexander was the lead designer for a record-setting human-powered vehicle team.
As Senior Vice President, Products, Amar Hanspal leads Autodesk’s development of solutions for designers and engineers.
He oversees the company’s cloud platform and full range of products for the Manufacturing and AEC industries. These include offerings for product design, building information modeling, simulation, manufacturing, data management, collaboration, and IoT. His team also leads user experience design and engineering for all professional products.
Prior to his current role, Hanspal was Senior Vice President, IPG Product Group. He joined Autodesk in 1987 and has served in a variety of roles, including VP of Autodesk Collaboration Solutions. Hanspal holds a Master of Science degree in mechanical engineering from the State University of New York at Stony Brook and a bachelor’s degree in mechanical engineering from the University of Bombay. He’s passionate about the power design and engineering have to address the world’s biggest concerns: sustainable products and transportation, clean water and energy, and livable cities.
Trae Vassallo is an investor, board member, and entrepreneurial advisor. Her current investments and advisories include eero, Shuddle, Meta and several stealth companies. Trae is also on the board of directors at Telstra, an international telecommunications company. Trae recently co-authored the study “Elephant in the Valley” to help expose the underlying data around the experiences of women in technology. She was most recently a General Partner at Kleiner Perkins Caufield & Byers. Over 10 years at KPCB, Trae invested in a number of leading companies including Nest Labs, Dropcam and Opower. A “design thinker” and engineer at heart, Trae thrives at the intersection of great products and big markets.
Before joining KPCB, Trae was a co-founder of Good Technology, a KPCB portfolio company that provides end-to-end wireless email services to the enterprise. Trae began her career at IDEO, where she developed ground-breaking products for companies including Palm and Dell. Trae holds 13 patents across a broad array of technologies and disciplines.
Trae earned a Bachelor’s degree and Master’s degree in mechanical engineering with honors from Stanford University and an M.B.A. from the Stanford Graduate School of Business. Trae is active on the Trust of the Stanford Business School and as an advisor to the Mechanical Engineering Department at Stanford. In addition to working with the world’s best entrepreneurs, Trae is passionate about empowering women in technology and adventuring with her husband and three children.
Jonathan Knowles serves as Autodesk’s Explorer in residence. With a charter of exploring what’s next for Autodesk with a view into the 10 to 100 year forward time frame, he is at the frontier of beyond the horizon technologies, providing insight on the evolving role of design-driven applied innovation, and participating in fieldwork to help solve some of the world’s most complex challenges. He is a creative thinker who researches the intersection of technology and global trends, bringing the arts and science together, while connecting people and ideas. A respected voice on future trends, long-range vision, and creating the preferred future, Jonathan collaborates with engineers, scientists, designers, educators, and others seeking to imagine, design, and create a better world.
Jeff Powers is co-founder and CEO of Occipital, a Boulder- and San Francisco-based company focused on mobile computer vision. With co-founder Vikas Reddy, Jeff led the development of RedLaser, a popular barcode-scanning application that was acquired by eBay. Occipital remained independent and later launched 360 Panorama, which became the best-selling panorama capture application on iOS. Most recently, the Occipital team has launched the Structure Sensor, the first 3D sensor for mobile devices. Jeff is also an angel investor in tech startups, including the mobile robotics company Orbotix and the seed-stage accelerator TechStars Boulder.
Eyal Gever is a cutting-edge digital artist whose work sits at the fusion of art and technology.
Using just a palette of code, he develops life-like digital simulations of moments in time — often dramatic or catastrophic in nature — from which he fabricates 3D-printed sculptures and installations.
Few artists possess Gever’s deep knowledge and passion for all things digital. The advantage, he says, is that he’s not overwhelmed by the power of technology for its own sake. “I’ve simply used the latest technology to develop a new language for my art.”
And Gever’s artistic language is constantly evolving. In 2015 he created Water Dancer, a “3D liquid simulation” featuring a dancer whose body appears to be made of water, which disperses as she moves.
In his latest project, #Laugh, which was two years in development, Gever is collaborating with Made In Space and NASA to create a visualization of human laughter, which will be 3D-printed at the International Space Station – and become the first piece of artwork ever produced in space.
Gever, born in 1970 in Tel Aviv, Israel, attended Jerusalem’s prestigious Bezalel Academy of Art and Design, but was injured while performing his mandatory national service with the IDF, where he served for two and a half years as a paratrooper (special operations unit).
Volunteering for the IDF’s specialist computer R&D unit, Mamram, he began to master the computer simulation skills which would later shape and infuse his art.
In 1993, while still a student, he founded Zapa Digital Arts (later renamed Gizmoz), renowned for developing one of the first virtual worlds.
The startup created a range of web-based products, primarily in interactive real-time multimedia communication software, as well as 3D design and animations, which enabled customers to communicate visually. (He went on to co-found Daz3D, after the latter company’s merger with Gizmoz in 2009.)
Carlos Bayod Lucini is architect and head of 3D laser scanning at Factum Arte. His work explores the technological processes that define the original qualities of objects, proposing a contemporary approach to the conservation of cultural heritage through high-resolution digitalization and the eventual creation of facsimiles (an exact re-materialization of a work of art). He is currently working on the development of the Lucida 3D Scanner, a system designed by the artist Manuel Franquelo to record the texture of paintings and relief objects. This research covers the development of both hardware and software written specifically for use with sensitive cultural objects.
Increasingly, artists and institutions require a new type of mediator to turn their ideas into physical realities. Based in Madrid and Milan, Factum Arte is dedicated to digital mediation, transformation and the production of works that redefine the relationship between two and three dimensions.
Factum Arte consists of a team of artists, conservators and technicians devoted to masterminding projects and solving problems, activities which have resulted in original solutions to highly specific challenges. The development and application of new technologies are accompanied by the highest levels of craftsmanship at every stage of the process. Factum Arte always works to the standards required by both the commercial and museum communities in which issues of conservation and archival permanence are of paramount importance.
The work carried out by Factum Arte is inherently original and diverse. It ranges from major conservation facsimiles to archival print editions, from the very big to the very small; in fact, Factum Arte will get involved with anything that demands a sensitivity to (and practical understanding of) the transformation, mediation and manipulation of information.
Nonny de la Peña is a pioneer in virtual reality, known primarily for her work in immersive journalism. She serves as CEO of Emblematic Group (formerly Virtual Pyedog) and is currently a senior research fellow at the Annenberg School for Communication and Journalism at the University of Southern California.
Fast Company recently listed de la Peña as one of the “13 People Who Made the World More Creative.”
Jesse is an early-stage venture capitalist who leverages his hands-on operating experience and software industry contacts to source and nurture exceptional design and engineering software opportunities. Prior to co-founding Borealis Ventures, Jesse served as an executive at two public application software companies: as EVP/President of Softdesk and as the first Autodesk VP of AEC.
He graduated with Distinction from the Indiana University School of Public and Environmental Affairs, completed his military service as a member of the White House Communications Agency and regularly serves on boards of NH non-profit organizations.
Aram Goganian is president and owner of Predator Cycling, a custom carbon fiber bicycle manufacturer and component fabrication company. Founded in 2005, Predator has recently expanded beyond hand-built carbon frames into CNC automation and 3D development, revolutionizing small business product development within the cycling industry. Completely self-taught, Aram plans on advancing the carbon fiber industry by combining digital design with applied mechanics to streamline production all while remaining within the confines of custom-built products to fit the needs of professional and recreational athletes alike.
Pyry Matikainen is a teaching fellow in reality computing at Carnegie Mellon University, where he received his PhD in computer vision. He has worked on live visualization of laser data, augmented reality games, and mixed reality teleoperation interfaces for robots.
Till Wagner is a physical polar scientist. In his research he combines pen-and-paper mathematics with computational modeling and polar field observations to gain a better understanding of the physical mechanisms that govern our planet’s high latitudes. His interests range from how glaciers calve icebergs to the response of sea ice to a changing climate.
Vincent Rossi hails from the great state of New Jersey. He holds a BFA in sculpture from the University of the Arts in Philadelphia and pursued graduate-level fine art study at Goldsmiths College, University of London. From 2004 to 2011, he worked as a sculptor, model maker and project manager for the Smithsonian’s Office of Exhibits Central and helped produce and manage many Smithsonian exhibits. Since 2011, Vince has worked as a 3D Program Officer for the Smithsonian’s Digitization Program Office – building 3D capacity, developing 3D workflows and trying to live life to the fullest.
Adam Metallo received his BS in Psychology from the University of Maryland in 2004 and an MFA in painting from the University of Delaware in 2006. Working at the Smithsonian Office of Exhibits Central, he developed workflows that integrate 3D technologies with traditional museum model making. In 2010 he began spending most of his time quietly sneaking up on inanimate objects to digitize them in 3D for the Smithsonian Digitization Program Office.
John Folan, AIA, LEED AP BD+C, is the T. David Fitz-Gibbon Professor of Architecture, Founder and Director of the Urban Design Build Studio (UDBS) at Carnegie Mellon University, and Directing Partner of PROJECT RE_ in Pittsburgh, Pennsylvania.
John and the UDBS have been working with urban communities in Allegheny County on the development and implementation of catalytic projects that address critical social challenges through participatory design processes. His work in Public Interest Design relies on both traditional and emergent forms of making as a mechanism for creating appropriate, culturally sustainable projects with collaborating resident populations. His work in Pennsylvania builds upon efforts in university-affiliated, community-based design and construction initiated while John was a tenured faculty member at the University of Arizona. In Tucson, Arizona, John co-founded, co-directed, and served as an executive board member of the Drachman Design Build Coalition (DDBC), a university-affiliated, non-profit 501(c)(3) corporation dedicated to the design and construction of environmentally specific, energy-efficient, affordable housing prototypes. Projects implemented in Tucson’s Urban Empowerment Zone, designed and constructed by the DDBC under John’s direction, were made available to HUD-approved families earning below 60% of the median income level in Tucson. John maintains a small private professional practice that focuses on social, economic and environmental issues in architecture and urban design.
Russell Loveridge is Managing Director of the Swiss National Centre of Competence in Research (NCCR) Digital Fabrication in Zurich, Switzerland. He studied civil engineering and architecture, completing his Masters at ETH Zurich, and obtained his doctorate at EPF Lausanne. He has worked professionally in construction and architecture, and also has extensive academic and research experience in the public sector and private industry; he was previously Research Director of the Laboratory for Architectural Production at EPFL. His research investigates advanced fabrication methods, smart materials, and novel construction techniques, with a specific interest in how these emerging technologies affect processes of architectural design and construction.
Dr. Sina Shahbazmohamadi is currently an assistant professor at Manhattan College. He received his PhD in Mechanical Engineering in 2013 from the University of Connecticut, where he studied life prediction of thermal barrier coatings used in gas turbine engines and developed a patented imaging technology for non-destructive evaluation of turbine blades. He pursued his interests in imaging and image processing as director of the 3D imaging lab at the Center for Hardware Assurance, Security, and Engineering (CHASE), where he developed advanced technologies for counterfeit detection and prevention in electronics. His friendship with Dr. Howe, a medical doctor, saxophone and oboe player, and collector of antique musical instruments, resulted in the development of a technology for bringing antique musical instruments back to life.
Sven is the chief technology officer (CTO) of Khosla Ventures and an engineer at heart. He has worked on numerous technologies, ranging from autonomous robots, automotive LED front lighting, and user interface and display technologies to RFID systems for production verification. During his tenure at KV, Sven has led the technical diligence for numerous investment opportunities in both IT and sustainability. He is excited by technologies that enable novel user experiences or markedly change the efficiency of operations.
Prior to joining KV, Sven spent six years at Mohr Davidow Ventures, starting as an associate and culminating as the CTO of the firm, where he led the firm-wide technical diligence process for the infrastructure IT and sustainability practices. During this time, Sven worked with technology companies in areas ranging from advertising automation to columnar storage. He also helped portfolio companies recruit technical talent, manage milestones, run product-market fit experiments and craft fundraising strategies.
Previously, Sven was a project manager for the Electronics Research Lab of Volkswagen in Silicon Valley, where he led numerous technical projects for all Volkswagen brands, ranging from LED front lighting (Audi) and electrochromic glass (Volkswagen) to RFID systems for production verification, driver assistance systems and robotics. His duties included leading the technical team, presenting research findings to board-level executives at the company's headquarters in Germany, and crafting intellectual property roadmaps and the adoption strategy for those technologies across brands. His most visible project was likely his role as lead engineer and project lead for the Stanford racing team's autonomous car, Stanley, which won the 2005 DARPA Grand Challenge, became the foundation for the Google self-driving car project, and is now on display at the Smithsonian.
Sven holds a bachelor of science degree in mechanical engineering from Purdue University, where he financed his studies by serving as system administrator for the mechanical engineering department's computer network. He also holds a doctor of philosophy degree in mechanics and computation from Stanford University. His Ph.D. research was funded by IBM and Intel and focused on the simulation of failure in semiconductor structures using high-performance computing (HPC) clusters.
In his spare time, Sven enjoys teaching his kids to code, flying autonomous drones and cycling. He also regularly serves as a mentor for Stanford’s E245 “Lean Launchpad” class.
Dave entered the corporate world at Bain and soon joined the startup world as Bonobos’s first hire. Years later, he flew westward to build the next great retail data platform at TellApart. In 2012, Dave founded Floored while an Entrepreneur-in-Residence at Accel Partners. He has a BA from Harvard, which he uses to handle business stuff and fetch things for Floored engineers.
Stuart Woods serves as vice president of Leica Geosystems' Geospatial Solutions Division, where he leads the company's mobile sensor business. Woods is responsible for the research, development and implementation of hardware and software content collection for multiple verticals. With a passion for value creation, Woods has held various management positions with prominent photonics companies, including Coherent Inc., SPI Lasers, Cisco Systems, Pirelli, Alcoa Fujikura, and Sumitomo Electric. Woods holds a Bachelor of Science degree in Chemistry from North Carolina State University. He has been published in various professional publications and was twice awarded the LaserFocusWorld Excellence in Technical Communications Commendation.
Andrew Hessel is a Distinguished Research Scientist with Autodesk's Bio/Nano Research Group, which is developing software tools for designing living and nanoscale systems. He's currently working to "3D print" customized cancer-fighting viruses, starting with dog cancers. He is a 2016 AAAS-Lemelson Invention Ambassador, a fellow of the University of Ottawa Institute for Science, Society, and Policy, and past co-chair of the biotechnology and bioinformatics track at Singularity University. He has written articles on genetic technology for The Atlantic, Wired UK, and Huffington Post and given over 100 invited talks on next-generation biotechnology.
Carl Bass is president and chief executive officer of Autodesk, a leader in 3D design, engineering, and entertainment software. Bass co-founded Ithaca Software, which was acquired by Autodesk in 1993. Since joining the company, he has held several executive positions, including chief technology officer and chief operating officer. Bass serves on the boards of directors of Autodesk, Quirky and E2open; on the board of trustees of the Smithsonian's Cooper-Hewitt National Design Museum; and on the advisory boards of Cornell Computing and Information Science, the UC Berkeley School of Information and the UC Berkeley College of Engineering. He holds a bachelor's degree in mathematics from Cornell University. Bass is a maker and spends his spare time building things—from chairs and tables to boats, and most recently, an electric go-kart.
After earning his master's degree in Aerospace Engineering, Mr. Napoli has been researching and developing additive manufacturing technologies for use in microgravity and the space environment. He was part of the team that built and operated the first off-world 3D printer in 2014 as part of NASA's 3D Printing in Zero-G Experiment, and was Project Manager for the Manufacturing Device, which will become a permanent additive manufacturing facility beginning operation on the ISS in 2016. Mr. Napoli is currently leading the path forward for in-space manufacturing of items for NASA, commercial, and STEM users.
Made In Space
Founded in 2010 with the goal of enabling humanity’s future in space, Made In Space, Inc. has developed additive manufacturing technology for use in zero-gravity. By constructing hardware that can build what is needed in space, as opposed to launching it from Earth, the company plans to accelerate and broaden space development while also providing unprecedented access for people on Earth to use in-space capabilities.
Made In Space's team consists of successful entrepreneurs, experienced space experts and key 3D printing developers. With over 30,000 hours of 3D printing technology testing and more than 400 parabolas of microgravity test flights, Made In Space's experience and expertise have led to the first 3D printers designed for use on the International Space Station.
Matt Berggren is Director of Autodesk Circuits and part of Autodesk's Digital Manufacturing Group, located at Pier 9 in San Francisco. Prior to joining Autodesk, Matt spent 15 years driving product teams at Altium, where he was responsible for Altium's larger electronic product development methodology. Later, Matt was Sr. Director of Product at Supplyframe, focused largely on vertical search for electronics and operating at the intersection of content, community and capability specific to electronics design.
Hao Li is an assistant professor of Computer Science at USC and CEO of Pinscreen, a startup focused on a next-generation mobile communication app with AR capabilities. Prior to joining the USC faculty, he was a research lead at Industrial Light & Magic/Lucasfilm. While at ILM, Hao not only developed next-generation real-time performance capture technologies for virtual production and visual effects, he also began collaborating with Autodesk's Reality Capture Lab in San Francisco. Since the lab was only a 10-minute walk from work, Hao would stop in for stimulating discussions with the ReCap team, who gladly lent him equipment to process reality capture data. At REAL 2016, Hao will have no shortage of topics to speak about: from geometry processing, 3D reconstruction, performance capture, and human hair digitization to the use of his algorithms at leading visual effects studios and by manufacturers of state-of-the-art radiation therapy systems.
Cosmo Wenman is an artist and freelance designer who’s been experimenting with 3D capture and 3D printing for the last few years. He maintains that 3D printers are actually de-materializers, and they are going to print—and dissolve—some unexpected things. He’s interested in network effects, intellectual property, piracy, pop culture, antiquities, and fine art.
Tristram Carfrae is a leading structural designer. Over his 30 years with Arup, he has been responsible for the structure of a dazzling array of award-winning buildings, both in Australia and the UK. His particular skills relate to the integration of engineering and architecture to provide the best holistic solution. Tristram has been fortunate to work with many of the world's leading architects on projects where the structure forms a major aesthetic component. In 1999/2000 he was the Design Leader of Arup Associates in London, a firm of 120 architects and engineers.
He has an unswerving commitment to designing better buildings that consume fewer resources (materials, energy, time and money) yet give more pleasure. He has helped design six structures that have won Special Awards from the Institution of Structural Engineers, the world's premier structural accolade.
Tristram is Arup’s Deputy Chairman. He is one of 16 Arup Fellows (out of a global staff of 10,000). This accolade honours those who have significantly contributed to the firm’s reputation for excellence in innovation and design and designates him as a leader with the role of ensuring this continues.
In 2001 he was named Australian Professional Engineer of the Year, and in 2004, 2005 and 2006 he was recognised as one of Australia's top 100 most influential engineers by Engineers Australia. In 2006 the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA) awarded him the title of Royal Designer for Industry in recognition of his achievements in engineering design, and he was also awarded the Milne Medal by the British Group of the International Association for Bridge and Structural Engineering (IABSE), which recognises his work as a designer and his continuing contribution to design excellence. In 2008 Tristram received the Hero of Innovation Award from the Warren Centre. In 2009 he received the MacRobert Award, the highest engineering recognition available for a project, and was also named one of twenty 'Heroes of the Green Age' in a publication printed for the COP15 Summit in Copenhagen. This year, he was awarded the prestigious Gold Medal by the Institution of Structural Engineers (IStructE), which recognises his exceptional contribution to the science and profession of structural engineering.
Frank cofounded BioDigital in 2003 on the premise that advancements in 3D and information technology will transform the way we understand health and medicine. With a mission to improve health literacy, BioDigital looks to empower patients and professionals with engaging, understandable health information using the first ever 3D virtual body platform, the BioDigital Human. With a background in engineering and health informatics, Frank uses this vision to drive product innovation. Frank has his graduate degree in engineering from Columbia and did his undergraduate work at the University of Michigan.
Jed Iverson is a licensed Structural Engineer as well as the BIM Delivery Group Lead at MWH Global. With a combined 18 years of CADD/BIM and Engineering experience, he has focused not only on engineering design but on how to use photogrammetry models through the complete design phase of a project. By capturing near real time information, accurate and reliable models can ensure early collaboration and prevent unnecessary re-work throughout the design life cycle.
Brian Mullins has consistently been on the forefront of technology and is a pioneer in the field of augmented reality. His experience with Human Machine Interface and Computer Vision technologies enabled him to found DAQRI and develop its proprietary 4D technologies for the enterprise.
Prior to founding DAQRI, Brian transitioned to industrial robotics from the field of military command and control systems, after spending three years as a consultant to the Space and Naval Warfare Systems Command in San Diego.
Earlier in his career, Brian worked as an engineer at the Computer-Aided Operational Research Facility operated by the U.S. Department of Transportation, supporting simulation and network technologies and working with some of the pioneers of those fields. Throughout his career, Brian developed and implemented systems that employed some of the earliest vision-based mixed reality technologies.
Brian holds a Bachelor of Science from the United States Merchant Marine Academy, where he studied electrical and mechanical engineering.
Nathan King is an Assistant Professor of Architecture at the School of Architecture + Design at Virginia Tech and has taught at the Harvard University Graduate School of Design (GSD), the Rhode Island School of Design (RISD), and the University of Innsbruck. With a background in Studio Arts and Art History, Nathan holds master's degrees in Industrial Design and Architecture. He earned the degree Doctor of Design from the Harvard GSD, where he was a founding member of the Design Robotics Group, with a focus on computational workflows and additive manufacturing and automation in the Architecture, Engineering, and Construction industries. Beyond academia, King is the Director of Research at MASS Design Group, where he collaborates on the development and deployment of innovative building technologies, medical devices, and evaluation methods for global application in resource-limited settings. As BUILD Programs Manager for the emerging Autodesk BUILD Space—BUILD for Building, Innovation, Learning and Design—and through his company Front Porch Automation, Nathan leads the development of research facilities, programs, and software to support the exploration of emerging opportunities surrounding technological innovation in art, architecture, design, and education.
Rael Romero is the Research Innovation Manager at MWH Global. A multimedia developer with 16 years of professional experience in visualization and technical development, he focuses on technical, production, and strategic development to implement multiple types of media and technologies for engineering design. Working within BIM, he integrates photogrammetry to capture real-world objects and develop BIM models for optioneering.
Alexx Henry is an award-winning director, photographer and 3D capture evangelist. He is the designer of the xxArray™ 3D capture system, and CEO of BlueVishnu. While his roots lie in celebrity and entertainment photography, his passion lies in capturing everyday people in 3D. Henry visualizes a world where people are using 3D avatars to make their lives better.
Recognized by The Huffington Post as "fearless in his vision," Henry has been working with technology and visual storytelling to evolve past his photographic roots. His studio has been recognized with numerous awards, including a Cannes Cyber Gold Lion for interactive, Digital Magazine Studio of the Year, and a Unite Best in Class award from Unity 3D.
A marine scientist turned designer and serial entrepreneur, Sly incorporated his passion for technology, photography, and design to develop a method for 3D capturing coral reefs underwater. His goal is to use technology and design thinking to create a space for conversation about sustainability issues surrounding oceans and climate change. He is a Singularity University GSP 2015 alumnus, a 2015 Fast Company Innovation by Design Award nominee, a World Economic Forum Global Shaper, and co-founder of a new haptic communication company, Emerge. His other work can be found at www.slylee.com.
Ray Blotteaux is the owner and President of Transition Composites Engineering (TCE). Founded in 2007, TCE is a veteran owned small business located in Ferndale, WA.
Mr. Blotteaux has nearly 30 years of composite product development and manufacturing experience in the US, Mexico, Asia, Canada, and Europe.
Ray has a unique combination of analytical and hands-on skills. He is intimately familiar with all of the steps required to take a new idea for a composite product from concept to mass production. From a single prototype to factories fabricating 30,000 pieces per month, Mr. Blotteaux's experience runs deep.
Ray has product and process development experience in the following industries:
Mr. Blotteaux has been awarded several US and international patents. He has been an adjunct faculty member and research advisor at Western Washington University since 2005.
Ray earned an MS in Mechanical Engineering from Utah State University in 1992 and a BS in Aerospace Engineering from the University of Minnesota in 1985.
Ray spent eight years in the US Air Force as an aerospace engineer, with experience in structural design, process development, flight test, and battle damage repair. He served in Operation Desert Shield/Storm.
Product Artist Lionel T Dean has been exploring the creative potential of digital design and manufacturing technologies for over a decade and is at the forefront of 3D printing in art and design. In 2002 he founded FutureFactories, a studio focused exclusively on 3D printing. Dean uses the virtual realm as a creative playground where designs exist as living entities with past and future states. 3D printing and reverse engineering allow transition from virtual to real and back again. Real-world artefacts are 'printed' from time-based virtual 'performances'. Families of unique configurations are generated from meta-design templates that exist in a constant state of flux. The artefacts are familiar yet strange, exhibiting some form of structure but defying the patterns and logic of the man-made world; the flora and fauna of an alien landscape.
Designer Francis Bitonti is ushering in a new manufacturing paradigm through his sui generis blend of computational design techniques and emerging manufacturing technologies. Blurring the lines between fashion and technology, Bitonti’s trademark process merges cutting edge digital design and manufacturing technologies, aimed to transform mass production.
Looking to the future of design and mass customization, Bitonti sees computational methodologies, smart materials, and interactive environments as opportunities to create new aesthetic languages for our built environment. He is taking that vision and applying it to his self-named luxury goods label, Francis Bitonti, launching in September 2014. "My design process is a collaboration with artificial intelligence," Bitonti explains. "We're transposing these ideas from design methodologies to tangible consumer experiences."
Francis Bitonti's work has been exhibited internationally at many prestigious institutions, including the Smithsonian Cooper-Hewitt National Design Museum, and has most recently garnered media coverage for the 3D printed gown created for fashion icon Dita von Teese, which received numerous accolades and a great deal of public attention when it debuted at the Ace Hotel in New York City in 2013.
Francis Bitonti currently lives in New York where he runs his design practice.
Michael started in 3D in 1985 as a metrology engineer responsible for solving aerospace manufacturing problems. He helped develop, and became the first user of, a revolutionary new portable 3D tool called the FaroArm. In 1995 Michael left aerospace to start Direct Dimensions as a "one-stop shop" for 3D scanning, imaging, and metrology services. Over more than 20 years, Direct Dimensions has expanded significantly and now provides one of the most comprehensive arrays of 3D scanning and modeling solutions for the widest range of applications and industries. Today Direct Dimensions employs over 25 technical and professional staff and has performed thousands of 3D scanning projects across industries from aerospace, automotive, art and architecture to museums, the military, medicine and movies. Michael graduated from Virginia Tech with a BS in Engineering Science and Mechanics in 1985, followed by a Master of Engineering Administration from George Washington University in 1990.
Bastian Schaefer, born in 1980, is a maverick, kitesurfer, TED speaker, father of a boy and a girl and, besides all of this, an automotive engineer. At Airbus he was involved in the design of in-flight entertainment and cabin components for all aircraft programs. In 2009 he joined the Cabin Innovation department and was a member of the project team that created the Airbus Concept Cabin with its mind-blowing bionic structure and transparent skin. Bastian is part of the "Airbus Bionic Network" and project leader of the Bionic Partition project, a joint innovation project with Autodesk, The Living and APWorks, which was unveiled at Autodesk University 2015 in Las Vegas.
Tatjana Dzambazova is a trained architect with twelve years of experience in architecture and design in Vienna and London. Perennially fascinated by how technology enables a more creative life, she decided to continue her career in the digital design world and has now been with Autodesk for fourteen years.
A staunch believer that we are all born with a sense of creativity and innovation and the need to express it, Tatjana has been telling stories about technologies that empower while at the same time leading product development teams to transform powerful algorithms into tools accessible by a wide range of people. Most recently, she has been leading the development of Autodesk Memento to empower artists, scientists, curators, media professionals, and others to tell their stories through the digitization of captured reality.
As Business Line Director, Elmer Bol is responsible for bringing to market and driving adoption of two emerging products, including Autodesk Memento. He oversees and manages an entrepreneurial team of product/program managers and engineers to define appropriate business models, product architecture and go-to-market strategies while building key industry partnerships.
Bol has held other positions at Autodesk in Product Management and Market Development. Prior to joining Autodesk, he was co-founder and CEO of Alice Labs B.V. and held several positions with IBM. Bol holds a master's degree in business administration from Webster University and a bachelor's degree in chemical engineering from The Hague University in the Netherlands.
Damon works in many areas of the Web3D industry and has been active in interactive 3D development and the 3D web for over a decade. He is currently working with IDEAbuilder, a building company that uses mixed reality technologies, digital fabrication, and advanced manufacturing to build homes more efficiently. Damon is actively involved with the 3D web's convergence with other technologies, including GIS, CAD/BIM/CAM, AR, and IoT. Through the A.E.C. Hackathon, he works with a growing community that looks to use innovative technologies to solve various industry problems.
Terry D. Bennett, LS LPF MRICS ENV SP LEED AP, is the senior industry strategist for civil infrastructure at Autodesk. He is responsible for setting the company’s future vision and strategy for technology serving the planning, surveying, civil engineering and heavy construction industries, as well as cultivating and sustaining the firm’s relationships with strategic industry leaders and associations.
Bennett has been a practicing professional, global author and lecturer for over 30 years focusing on the impact of technology and sustainable approaches to urban planning, civil infrastructure and cities. He holds multi-state licensing in both Land Surveying and Natural Resource Forest Management, is a Chartered Land Surveyor with the Royal Institution of Chartered Surveyors, a Credentialed Envision™ Sustainability Professional with the Institute for Sustainable Infrastructure and a LEED Accredited Sustainable Design Professional.
He currently serves as a Board Member of Harvard’s Graduate School of Design’s Sustainable Infrastructure Advisory Board, is an editorial board member of Informed Infrastructure Magazine, a charter member of the Institute of Sustainable Infrastructure and a member of its technical committee and a council member on the Urban Land Institute’s Public Development and Infrastructure Council.
You can find his publications and projects on his LinkedIn page at https://www.linkedin.com/in/terrydbennett
Tod Stephens, President of NexGenViz, is an electrical engineer by trade and has specialized in 3D computer modeling, simulation and animation for over 20 years. He started his career programming robotic equipment for Ford Motor Company and has worked as a software applications engineer for several Autodesk resellers and for his own company, NexGenViz, which he founded in 2005. Recent projects include modeling the neuromuscular junction of the human body and being part of a team developing a physics-based chewing simulation for a dental software application that was awarded a US patent. His role in that project involved programming a virtual environment to analyze the forces involved in chewing, based on CBCT and laser scans of patient bone geometry. This merging of real-world human geometry and virtual environments led to his current project: developing a modeling, simulation and fabrication workflow for lightweight armor design for police and security personnel. Tod was one of eight individuals chosen to present a technology proposal on the armor design at the SOFIC conference in Tampa, FL. The presentation was given before the TALOS team, which is building a real-world "Iron Man" suit of armor for the US military. Tod is looking forward to sharing his progress on his armor design and reaching out to engineers and designers at the REAL conference to foster multi-discipline collaboration. Tod is also co-owner of a small 3D printing company, NexGen3D, which designs and makes products for hobbyists, artists and garage inventors.
Aaron has over 19 years of experience in mechanical engineering; honing his abilities during his tenure as chief product development engineer at Porex Surgical (former manufacturer of MEDPOR®) from 2002-2008. Following his time at Porex, Aaron and his wife Golda have built a leader in design and machining services in the south western Atlanta area. In 2011, Aaron founded Poriferous LLC, using his previous experiences at Porex and GT Virtual Concepts to create a medical device company with a focus on innovations in implant technology.
Goksel is the Founder and Lab Director of PercepTonic, his Dallas-based startup, where he formulates embedded vision solutions for next-gen products. Thanks to the Robotics Institute at Carnegie Mellon and seven years with Texas Instruments' R&D labs, Goksel is a recognized expert in computer vision, with patents and product contributions in automotive safety, video analytics, face and gesture recognition, and augmented reality. Goksel has been working with autonomous systems, including drones, for over 20 years. When he's not busy prototyping, Goksel is presenting at an industry event or organizing an IEEE workshop. His end-to-end expertise in hardware and software makes him a sought-after advisor on "All Things Vision."
An in-depth follow-up with speakers from REAL 2015, and a preview of what's to come at REAL 2016.
William Trossell is co-founder of ScanLAB Projects, a creative studio specializing in large-scale creative and architectural 3D scanning.
A: What were your impressions about REAL 2015? What was one of your top learning experiences?
W: The people there were amazing, but one highlight was talking to Alexander Rose from the Long Now Foundation. We struck up quite an interesting conversation by the bay around this concept of bit rot — this sort of trendy term which has been floating around quite a lot. And I said quite abruptly, “You know, video has gone through 60 hardware formats in the last 20 or so years, and it’s been around for a lot longer than laser scanning. I wonder how many file formats we’re going to go through in the next 20 years, or for however long laser scanning will continue to be around.” It was an interesting conversation on how to store this data forever and what that means for the way we collect and store it, what it means for museums and heritage, and so on. Everything that we investigate about the past is written on a physical object like a paper, or a tablet, or etched in stone somewhere. Thinking to the future, should some sort of international community be responsible for capturing data and keeping a public archive of the data? The conversation was about all of these kinds of open-ended questions which I really enjoyed.
A: I think with the way the world is connected and changing right now, there is a sense of urgency around creating digital copies of physical artifacts, to preserve them from damage by natural phenomena or destruction by humans.
W: Exactly right. The counter-argument, which came up while we were working in the Arctic, was that once these artifacts are recorded, is it okay for them to be destroyed? Does it take the pressure off preserving them? The part of me that has worked in a manufacturing setting for a year or two of my life feels that it's horrendous that everything gets digitized and that it's not okay for the physical artifact to be destroyed. Maybe our efforts should be in preserving the physical and not worrying about the digital. All of these kinds of arguments are suddenly moving quickly into the foreground.
A: My take on it is that digital copies should act as a backup for physical objects, just in case something happens where they are not able to be preserved. The exact same way you make digital copies of digital files and back them up somewhere, we can now make digital copies of physical objects, but it doesn’t mean we should abandon attempts to preserve the original physical object.
W: There are conversations with groups about what's happening with ISIS in the Middle East over the next 15 years. Right now, it's too hot to go into Syria to preserve these artifacts, but planning ahead of ISIS needs to happen. The discussions around the urgency of historical preservation are very exciting, daunting and challenging with respect to the effects of perpetual and destructive conflicts in the Middle East.
This was first mentioned when we were capturing sea ice in the Arctic with Greenpeace and Cambridge. Greenpeace were a little worried that once the scans of the sea ice were turned into an exhibition, the public would relax about the issue, thinking, "Oh, we have it, we can just have a robot remake it."
This reminds me of a company called Factum Arte that we've worked with on numerous important projects. Two years ago they opened a facsimile of Tutankhamen's tomb in Egypt. To many, it's the best replica of any historical site ever made. They used all of the latest scanning technologies to capture a huge amount of physical data, and used CNC machines and custom-made printers to create a physical replica of Tutankhamen's tomb that was as close as possible to the original. Then the replicated site was opened to the public. Visiting these facsimiles is as close as a visitor can get to the real thing, and in terms of preservation, the facsimile actually helps reduce the amount of damage done by members of the public who would otherwise be walking through Tutankhamen's real tomb.
A: When did you begin to experiment with Reality Capture and how does ScanLAB differ in its approach to laser scanning?
W: A lot of people would traditionally come at laser scanning with more of an engineering or surveyor’s eye, whereas we see it as a creative medium. It’s certainly a different way of visualizing the world. It’s an evolution of the camera and the photograph, and there’s a lot of interesting conceptual positioning of laser scanning within the realm of photography.
I actually used to have a job at a fabrication warehouse in East London where I was making giant art pieces for emerging artists such as Matt Collishaw, Alex Holder and Conrad Shawcross. Making these pieces required me to extend my architecture training to include robotic fabrication, CNC, and 3D printing. Scanning came along as a beautiful tool which created a symbiotic relationship between physical reality and the computers on which we were designing. This ability to digitize our surroundings quickly became an invaluable tool, and it really created new avenues for design and experimentation.
My thesis for example — the construction of a rainbow machine — was pretty bonkers. I was placing these water tanks within a forest clearing, and the tanks were equipped to sense when someone walked into this particular forest clearing. The tanks would also sense and calculate the position of the person relative to the position of the sun. Then, when the sun was at the correct angle in the sky relative to the person, a mist would automatically be set off and that person would be able to see a beautiful rainbow. Laser scanning was a pivotal tool in planning this one- or two-hundred-year event (laughs), you know, that someone would actually come walking through this exact glade and see a machine-produced rainbow.
What allowed us to do this was that our university was lent a FARO 120. This was a big beast of a scanner. It weighed probably about 15 kilos and cost around $120,000. But it was an extremely high resolution scanner, fixed onto a tripod station just like a surveyor's station. The FARO 120 sent an infrared pulse 300 meters out into the clearing. We just had to work out how to balance a person's position, the angle of the sun and the expulsion of the mist. This meant we had to put some rather awkward parameters into the project. For example, all of the water had to be collected from rain, and the power for this instrument was produced by wax pistons expanding and contracting with the heat of the day and the night, which would pressurize the water tank. When the scanner sensed a person, it would burst to life and produce a mist, which would then refract the sunlight and create a rainbow. I didn't actually realize the whole thing as a working structure, but I did have all of the elements. I just ran out of time, plain and simple. But I built all of the tanks, proved that the rainbow machine worked, and proved that the wax pistons would be able to compress the air and so on. We did send some people through the clearing, and it was a bit of a shock to them because the rainbow machine comes to life with a loud bang and a hissing noise (laughs). So, it wasn't very subtle.
The next thing we needed was basically something good to scan. Two things came up quite quickly: it was the end of the year, and all of the students put up their work in an exhibition. Each student had a very small space for a very limited time. I think each student had about 10 days to have their exhibition up. It would have taken us 10 days just to set our exhibition up, so we thought it would be a good idea to scan the exhibition itself and archive it indefinitely for the school and all of the parents, as well as anyone else who would be interested in seeing all of the students' work. It dawned on us that this scanner was not just a tool for measurement. Its resolution was so high that it had an uncanny ability to record forensic levels of detail. We realized we could capture and reconstruct this exhibition for all of these students, the way the police reconstruct a crime scene during an investigation.
A: What does it take to reconstruct and send scan data into another application like a hologram or alternate 3D format other than a point cloud?
W: I am waiting, I really want a hologram table. I think I'll be one of the first people in the queue for one of those (laughs). And it's a particular frustration for us, because the point cloud is such a beautiful, ephemeral, intangible and elusive beast which is consumed through a 2D perception on a flat screen, whether it be a mobile device, laptop, or whatever. It's interesting to see emerging technologies like Google Cardboard or Oculus Rift, which offer this ability to transpose yourself and immerse yourself in this digital world.
A: Have you ever experimented with transforming point cloud data into a VR or AR experience?
W: We did a show in New York which is on our website. We went to the Brazilian rainforest and recorded some locations deep within the jungle. Then we took those point clouds and put them into an Oculus Rift experience built on the Unity game engine. We did a bit of custom coding to get that to work, and put it into a gallery space at the New Museum for three months. It's in Oslo and Berlin at the moment. Typically, with the Mark I and Mark II versions of the Rift, you can sit there and rotate your head, but that's it. We thought, okay, this is interesting, but it's not that great. So we took it a step further by placing a tracking system on the headset and cameras in the room. This allowed you to literally walk through the point cloud. You could use your body to navigate back and forth around the cloud and inspect this tree that was in the middle of the captured Brazilian forest scene. It was quite an expensive thing to put OptiTrack together with Oculus and get it all working, but we were very pleased that users could walk around in this VR space instead of having to stay in a fixed position where you just look around from a single point of view.
We had to make loads of decisions to make this happen. First of all, we aren’t a software company (laughs). So the engine we built is good in that it can handle 40 million points. As a measure of how much information that is, when the scanner is capturing at its full capacity, it can capture 750 million points per scan. We regularly deal with point sets that are around 5 billion points, and we can just about visualize it through a few sneaky tricks. But there’s no way you could wander around in it. That’s just too much information. Maybe if NVIDIA and some very clever coders came down, and Autodesk got involved, we could probably do something really cool, but we had to go with the 40 million points and choose the right number of scans, which was about four. We had to then optimize it to make it appear to the user as a solid point cloud, and steer away from trying to make a seamless, solid type of vision.
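The render budget described here (40 million points out of scans that can run to billions) comes down to decimating the cloud before it ever reaches the engine. As a minimal sketch of that idea — not ScanLAB's actual code, and with an illustrative function name and budget — uniform random subsampling keeps the overall density profile of a scan while fitting the budget:

```python
import numpy as np

def subsample_point_cloud(points: np.ndarray, budget: int = 40_000_000,
                          seed: int = 0) -> np.ndarray:
    """Uniformly subsample an (N, 3) point array down to a fixed budget.

    An engine that can only hold ~40 million points must discard the rest;
    uniform random sampling preserves the scan's overall density profile.
    """
    n = points.shape[0]
    if n <= budget:
        return points  # already within budget, nothing to discard
    rng = np.random.default_rng(seed)
    keep = rng.choice(n, size=budget, replace=False)
    return points[keep]

# Toy example: a "scan" of 1,000 points cut down to a 100-point budget.
cloud = np.random.default_rng(1).random((1_000, 3))
small = subsample_point_cloud(cloud, budget=100)
```

In practice a spatially aware scheme (e.g. voxel-grid downsampling) gives more even coverage, but the uniform version shows the core trade-off: a fixed memory budget against the full fidelity of the scan.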
A: I’m sure you’ve been extremely busy, but what is one problem you’ve been working on at 3D Robotics since REAL 2015?
C: What's still an unsolved problem for us is how do you make the scanning process easier? How do you push a button and get a great scan? At REAL 2015, I showed some of our earlier work — the one-button building scanner, the one-button infrastructure scanner — and we've taken it further since then. We've been working with the ReCap team to combine our API with theirs to make the whole experience (capturing reality data, uploading it to the cloud, rendering the data as a 3D model and mapping it) completely instantaneous and automatic.
A: In your presentation you also mentioned a conversation with Carl Bass about which whole areas have already been digitized, and you mentioned Google Maps as an example. How do you see Reality Capture by drones fitting into this process of digitizing whole areas?
C: Yeah, this is an ongoing conversation Carl and I have: if you could know the physical world as well as you know the digital world, what would you do with that information? What evidence do we have of early examples of that? When I was a kid, the space age had just begun and satellite technology was still in its infancy. But satellites gave us the first glance of the world from above. And Stewart Brand famously drew on that to give us a picture of the world in the form of the Whole Earth Catalogue. It was a radical change in perspective that basically spurred a generation to think about the environment and the planet, because we now had the data to be able to see our effects on the environment from a whole-earth perspective.
When you think about it, drones are the new satellites. Satellites gave us an overhead view that was extensive, above the clouds, you could go anywhere at any time. That was great, but that was literally a half century ago. Satellites have gotten a lot better but the idea now is that space is always going to be expensive to travel to, and it is always going to be far away. What we realized is that just ten or fifteen years ago, there was a great debate going on with telecommunications—would it be satellite phones or cell phones? On the one hand, satellites are exciting and powerful machines that cover the planet and have amazing capabilities. Sixty-four satellites could give internet and telephone access to the whole world. By contrast, you would have to build a hundred million cell phone towers to get the same coverage.
Why would anyone want to build a hundred million cell phone towers? We know now that this multitude of cell phone towers is in fact the better solution. But why did cell phones beat sat phones? The answer is that they were cheap, you could deploy them where they were needed, and they could scale to high bandwidth requirements. They were scalable, it's simply that. And drones are scalable in a way satellites are not. When you inhabit the notion that drones are cheap and easy and able to access anywhere in the sky, it starts to feel like the internet — you know, fast, cheap and out of control (laughs). When you think of it that way, with drones as the new satellites, and you remember how influential the satellite was, then you can see why Carl and I have such a vision for reality capture going forward.
A: So what have you been doing with the Reality Capture team since REAL 2015?
C: You know, to be honest, it is not sexy stuff. It's just connecting the pieces. It is one thing to say "one-button reality capture" and it is another thing to actually deliver. What we have been doing in the background is taking all the little pain points out of the workflow: planning the mission, safely executing the mission, capturing the right imagery, transmitting that imagery automatically to the cloud, and tagging it, so you get a high probability of a good scan. We've also been working on speeding up the modeling, compositing the experience, displaying it, and getting it to work over cell phone networks if you don't have Wi-Fi. All of that little stuff in the background is what we've been focusing on.
A: If you were to describe your experience at REAL 2015, how would you do so?
C: Well, what I loved about it was how interdisciplinary it was. You just sit there and listen to one person giving a speech on a bridge, the next one would be about a farm, the next would be basic research, someone was speaking about construction… it was amazing. Reality Capture covers so many different industries. They all share this need for measuring real world data, but what they learn from that data and how they change their practices as a result of what they learn from that data is very interesting. Every presenter was different. We’re having this moment—it’s like the internet, or electricity. The internet is not an industry, it transforms all industries. It transformed everything it touched. In this moment, we’re realizing we suddenly have this stick called Reality Capture and we can start hitting things with it. So what happens if I hit farms with Reality Capture, what happens if you hit the seas with Reality Capture? And that is what is so interesting about REAL — one speech would be talking about counting chimpanzee nests, the next would be measuring truck movements at some construction site. So it was great to learn from other industries that are using the same tools that you are, but in different ways.
A: Will you be attending REAL 2016?
C: Sure! We have a really close relationship with Autodesk, and we share a real vision that is basically about the uncharted territory out there: extending the internet into the real world. You can only manage what you can measure. Autodesk comes from kind of a CAD world, which seems very arcane; we come from a drone world, which seems very arcane; but we are both interested in something much bigger, which is big data and the ability to manage everything the way we manage the internet. I think we are on parallel paths in that respect. They do a great job at data analysis and we do a great job at data capture, so now we are both on this great journey of discovery.
A: What were your impressions of REAL 2015?
S: I was pretty impressed by the scope of the subjects and people, and I was surprised at how many others were using reality capture in the field. The first speaker, Tim Zaman, was very interesting. To me, his presentation questioned the root of what we value as a culture. We are always fascinated with originals, like the Rembrandt painting—which he essentially cloned with 3D technology. It amazed me that he was able to create such an accurate clone, and it was interesting for me to think about the cultural implications of that process coming into existence.
Another great part of REAL was meeting Jonathan Knowles. He’s a really fascinating guy who shares a similar background with me in that he also brought an outsider perspective and disruptive approach to underwater science. I’ve written countless grants. I write about 3D technology I’m using, how I’m using it, why I’m using it, and no one seems to get it. You could say I’ve been shut down pretty hard. But Jonathan gets it. He is open to new ideas, and understood right away what we are trying to do with the Hydrous Project.
A: What have you been up to since REAL 2015?
S: Since REAL, I went to Easter Island and conducted another expedition there. The timing was fascinating, because El Niño happened to be rolling through the area while we were there. As a result, we were able to sporadically capture coral bleaching in action over a period of 18 days. A similar thing happened in the Maldives, but we arrived a bit too late. Anyway, as you can imagine, conditions were really rough on Easter Island. It was also a hassle logistically, as the Chilean Armada ended up shutting down the port. So of the 18 days, we were only able to dive 6 — which was a bit disappointing. But we did capture some excellent coral data, so it was worth it.
It was also valuable in another way because it made me realize that we need to change the structure of the Hydrous Project into a for-profit entity in order to cover costs and allow us to accelerate the capture of coral data. So that’s our other news — we will be incorporating by the end of August.
We’ve also been exploring interesting partnerships with a few different groups: OpenROV, who presented at REAL 2015, MakerCore, the Aquarius Underwater Facility in Florida, and a company called Blue Ventures.
A: How do you think of Reality Computing as it relates to the Hydrous Project?
S: Reality Computing is really valuable in that it allows us and others to access resources like coral reefs. Scientists are getting more accurate data about reefs, and we are able to offer immersive, visual educational experiences that have an emotional impact. People are benefitting from having access to scarce resources like coral reefs. But people are moved by stories; even if we have a visual component or an experience like virtual reality, people might put it on for a second and not take much away from it. Reality Computing increases our capability to present visual data as a story, which is what really sticks with people. One project suggestion that has come up a few times is for us to create an interactive experience using our reality data. A classmate at Singularity University had an idea for a fun animated experience. He said we could anthropomorphize an elderly coral reef and ask it what's been happening in its coral reef neighborhood over the years (laughs). We could use great voiceovers to tell its story and so on, which would be a lot of fun.
A: What characterizes a healthy coral reef?
S: A good reef is able to thrive under natural conditions, and recover from natural stressor events. Storms and tsunamis damage corals, but a healthy reef can recover pretty steadily. Also, a healthy reef is characterized by a balance of what’s called the inverse pyramid. In other words, the more top predators like sharks and big fish you have near a reef, the healthier the reef. These top predators are like the lions of the ocean, they regulate the sickly fish and help drive natural selection.
In an unhealthy reef you won’t see these predators. This is due to man’s overfishing down the food chain. In the process, the big fish are most desired, so are fished away first, then the next biggest, and so on. This leaves tiny fish like chums as the top predators, which throws the reef ecosystem off balance.
A: How did scientists measure coral reefs in the past?
S: Scientists used in situ methods, which just means methods that can be carried out in the field itself. In situ, scientists were limited to measuring tapes and other tools that estimate reef complexity, as well as photographs. There were a lot of ways to measure coral ex situ. For example, scientists could break off pieces of coral, take samples, and bring them into a lab for testing. The holy grail of science is to do everything in situ, without damaging or affecting the object of study. The traditional method of measuring coral comes from examining 2D photographs. This left a lot of data gaps, because scientists weren't able to create large, accurate 3D models. Having these 3D models allows us to ask bigger and more direct questions.
Now, scientists like me have the option to use powerful 3D software that is part of this new technology boom called Reality Computing. Specifically, I'm using Autodesk's Reality Capture software. By actually capturing what we see in reality, we gather real, quantifiable data for scientific use. Instead of an interpretation or a model, we capture reality using relatively simple instruments. Right now, I'm using a consumer-level DSLR camera, a Canon 5D Mark III, to take underwater photos that the 3D software meshes together to create 3D models. Hopefully we'll be upgrading our equipment this year.
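One quantifiable measure reef scientists commonly derive from such photogrammetry models is rugosity: the ratio of true 3D surface area to flat planar footprint, where higher values indicate more structural complexity. As a hedged illustration — this is not the Hydrous Project's actual pipeline, and the function name is an assumption — it can be computed from a reconstructed triangle mesh, assuming a roughly top-down mesh without overhangs:

```python
import numpy as np

def mesh_rugosity(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Rugosity = 3D surface area / planar (projected) footprint area.

    vertices: (V, 3) array of xyz positions from the reconstructed mesh.
    faces: (F, 3) array of vertex indices, one row per triangle.
    A perfectly flat surface scores 1.0; complex reef structure scores higher.
    """
    tris = vertices[faces]               # (F, 3, 3) triangle corner positions
    e1 = tris[:, 1] - tris[:, 0]
    e2 = tris[:, 2] - tris[:, 0]
    # A triangle's area is half the magnitude of its edge cross product.
    surface = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

    flat = tris.copy()
    flat[:, :, 2] = 0.0                  # project onto the horizontal plane
    f1 = flat[:, 1] - flat[:, 0]
    f2 = flat[:, 2] - flat[:, 0]
    planar = 0.5 * np.linalg.norm(np.cross(f1, f2), axis=1).sum()
    return surface / planar

# A flat square (two triangles) has rugosity exactly 1.0.
square = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
flat_score = mesh_rugosity(square, tris)   # → 1.0
```

This is the kind of direct, repeatable question — "how complex is this reef, and how is that number changing year over year?" — that measuring tapes and 2D photos could only estimate.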
A: What do you hope to accomplish with the Hydrous Project?
S: From the smallest scope to the largest, in terms of coral reef science and monitoring, this could be a very interesting approach for marine managers and anyone interested in tracking coral reefs to adopt. This methodology offers a rapid, non-invasive, and visual way of knowing how well corals are responding to their environment. Coral degradation, coral bleaching (the loss of a coral's beneficial algae), and coral disease are on the rise around the world. Now that we can track and interact with visual data, we're working with developers to create immersive virtual reality learning experiences. This is the second major goal of the Hydrous Project: to increase access to these innately obscured resources. Very few people will ever see a reef in person, but this will hopefully give them some insight, access and an emotional awareness of coral reefs that they would not have had otherwise. Hopefully, it will make an impact.
A: Are you planning on coming to REAL 2016?
S: Yes, absolutely! I’m looking forward to it, and I’ll see you there.
A: What were your impressions of REAL 2015?
F: It was a great conference! It was interesting because it was strongly oriented towards capturing reality data and computing, and less towards the create side of things for me. When I went to REAL, we were just starting work that required a lot of scanning, so it was perfect timing. The people I met there were a great source of information, and I've been experimenting with some of the scanners and analogous equipment that I learned about during REAL 2015.
A: How does Reality Computing play into your work?
F: I think scanning and being able to capture three-dimensional information is going to help us with user adoption. On our end of things, it's very helpful because it allows us to inform a design. We wouldn't necessarily use the whole scan, but we would process that scan for certain kinds of information that we would then use to make something fit better, or respond to a certain kind of context or environmental condition. So for us, the scan gives us great data to analyze, as well as meaningful metrics.
A: What have you been working on since REAL 2015?
F: We've been working with a company called Volumental, and another one called Body Labs. The reason is, when we get a full scan back, it has too much information. For example, we're working on shoes right now. Part of the idea is to create shoes that will fit everybody without having to use a standard size. If we get a whole scan of a foot, it's too much information for us to dynamically process that model. We can't take that information and make the shoe on the fly in an app, or on a server somewhere. So what we've noticed is that we need to partner with people who are able to process that scan into the dimensions that we need for our design. A shoe is not your foot. You're not covering your foot with fabric. The scan contains all kinds of information, and we need to filter out the excess that has nothing to do with our design. Those are the kinds of things I'm really interested in. The question becomes: how do we filter out what I see as just too much data?
A: What drew you to 3D printing as a method for design?
F: When I started the studio, my intention was to use flexible tooling, or anything that doesn't use dies or molds. What 3D printing really allows you to do is play material scientist. You're getting down to very fine levels of control. As a designer, my interests have shifted to being able to manipulate materials. We still work with other materials like film and metal, but 3D printing is very much what we do.
A: Are you interested in going to REAL 2016?
F: Yes! Absolutely, it was a great conference, and I would love to go back. I have some news that I can’t share with you now, but you’ll be hearing about it after this year’s Fashion Week.
Description: Bruce Beasley is recognized as one of the most innovative sculptors in the world and his works can be found internationally in the permanent collections of over 30 museums.
Andrew: What were your overall impressions of REAL 2015?
Bruce: I thought it was great, because they really understood this idea that the different elements of design, capture and build are all related, even though they are generally thought of as separate and distinct from one another.
And as a sculptor, I've been pursuing Reality Computing since 1987. I was, I think, one of the first sculptors to use 3D CAD in making sculptures. That was real early, in '87. And for me, the original drive for getting interested in CAD was simply to achieve freedom of shape. As it turned out, I was able to achieve freedom of shape. But the problem was that those shapes and sculptures existed only inside the computer. So the next step was to free them from the digital confines of the computer and make them physical and real.
Even though the geometric sculptures I was able to design in the early CAD software were complicated, they were planar, so I could — with some unfold programs — get those shapes out of the computer. But as soon as I got to shapes with compound curves, they didn't lend themselves easily to getting out of the computer. And this led me to 3D printing. I was an early adopter in my field for that.
The limitations of 3D printing showed me that I had an immediate problem of scale. Since then, I've gotten involved with some of the biggest 3D printers ever made, and I've been using them to create some very big bronze pieces. For me it's just been a continuum of the search for freedom of shape, freedom of scale, freedom to basically make anything that I can conceive of. I'm currently making sculptures that I could not make without digital technology. The sculptural vocabulary that I can speak in is now as broad as my imagination.
Andrew: What are you looking forward to most about REAL 2016?
Bruce: Well, at REAL 2015 I met a lot of really interesting people. I think one of the phenomenal things about REAL is that you realize how many really interesting people are doing really interesting things. And there just isn't anything else out there that allows us to hear about what other people are doing and talk with them in person. I mean, there are websites and such, but REAL achieves something really useful in bringing people together to create and collaborate. It's becoming ever more important to experience the kind of cross-fertilization that was going on at REAL. You go there and see a lot of things and think, 'Boy, that guy's doing something really cool there. And there's a little piece of that that would really be a big deal for me.'
I’m looking forward to REAL 2016 for the sense of excitement and energy of seeing other people that are busting their ass trying to innovate. It’s energizing to all the rest of us to know that we’re not alone in our little shops.
Andrew: In 1987, what specifically drove you to explore CAD in order to achieve freedom of shape?
Bruce: If you look at the Bronze | 1987-Present pages on my site, I can tell you that those shapes came out of a desire on my part to breathe life into elements that, alone, are just simple rhomboid shapes that carry no human feeling. Could I treat those the way a composer treats individual notes on a musical scale? I mean, after all, the piano is available to anybody. Anyone can go plunk notes and they're just individual sounds. They're not music yet. It's the combination of those notes, the repetition of them, the chords they make when they're played together, that turns something which is really just noise into music.
So I wanted to take simple rhomboid shapes that, again, alone were not of interest, and combine them, intersect them with each other, to make shapes with a very different kind of totality than the rigid geometry the rhomboids have alone. At first, I tried to make these with cardboard models. And one of the issues is that I'm not an artist who sees a finished piece in my head. I want to be able to literally explore the form; I sort of feel like I discover the sculptures.
So what I needed was a way to try a whole lot of what I call "what ifs". Well, what if I bring one block here? What if I rotate this one? And I was doing that by making a cardboard model for every potential change. Each of those was five to six hours of what I'll call visual experiments. I was frustrated, because I knew I was onto something that was leading to a sculptural vocabulary that was really me. But it was just too cumbersome.
So that was when I said, there's got to be a way to do this. And that was very, very early in computer CAD. But with the help of Don Glaser, a Nobel Prize winner in Physics at UC Berkeley, I was able to get the use of one of the early high-end computers and programs where I could interactively move these shapes into each other.
And it was great, because I didn't have to say, okay, I'll move this ten degrees on the Y, seven degrees on the X. I could move them freely, which as an artist was great for being able to immediately see the results. I didn't want to turn it into numbers. It allowed me to see these compositions on the computer and try literally hundreds of variations, probably 100 or 200 iterations for any single one of these sculptures.
Andrew: In what ways did this early CAD program enhance your creative process?
Bruce: What I was able to do was suddenly have a tool that was very sophisticated but very spontaneous. And I could play with these intersecting shapes. It was like playing God in outer space. I didn't have to cut things out of cardboard, glue them, and hold them together while the glue set. I mean, it was just wonderful. I suddenly had this magic world where there was no gravity and I could just push one shape into another and rotate it and take it back out, extend one side and so on.
This led to a whole new vocabulary of shapes.
Andrew: Could you elaborate on what exactly led you to explore 3D printing as an option for digitally fabricating your CAD-designed sculptures?
Bruce: Okay. So, at the time, it was possible to write a program that would in effect unfold the shapes I was creating. And I could cut the patterns of each face. So that would give me a pattern that I could print onto cardboard or any material and cut out. They were numbered and I knew where they joined together.
And that was just perfect. But of course, I started getting interested in other kinds of shapes. And I got interested in shapes that had curves and compound curves. And then that was it: I couldn't get these new shapes out of the computer, for the same reason you can't make an accurate flat map of the world. It's a compound curve.
Now I had freedom of shape on the computer but I didn’t have freedom of shape back in the real world because I couldn’t get these sinuous compound curve shapes out of the computer.
And that led me to get involved in 3D printing because that is exactly what 3D printing is aimed at. It can make anything that’s on a computer file no matter what kind of overhangs or undercuts you have in your shape.
Some of the early 3D printers were very expensive with small build envelopes, but I was determined to use them so I bought a few of them.
Now I had freedom of shape on the design. I had freedom of shape back in the real world, but I had given up freedom of scale and freedom of choice of material because it was all white plastic. They were big for 3D prints, but still, as sculpture goes, they were smaller than I wanted. I had design freedom and the ability to make my digital ideas real, but I had constrained myself with material and scale.
So then I said to myself, well, I’ve just got to invent another way to do this. I began to think that if I can print what will go directly to the foundry, then I can print sections in a larger size and save an important step.
So in the normal process of enlarging a sculpture – let’s just say any one of the Coriolis pieces – you would have to somehow get a positive in some other material like plaster or clay and enlarge the exact shape you want. Then you have to get a rubber and plaster mold made in order to get a hollow wax form that can go to the foundry and be cast.
Well, the process of enlarging any of these was really problematic.
The Coriolis series, for example, are actually very complicated shapes. I hope aesthetically they have a certain unity in their movement and all, but they’re actually very complicated. So you wouldn’t be able to – even as a skilled technician – enlarge one to, say, six or seven feet just by measurement. You’d never get it right and it wouldn’t have the visual integrity that the 3D printed pieces have.
The only other way to do it would be to machine mill it in a high-density foam. But that can cost up to $25,000 per piece. With 3D printing, you never have an enlargement. You never have a rubber mold. You 3D print the hollow part of what’s going to go into the traditional bronze casting.
But the first step was finding a material that would burn out cleanly. And that’s not a trivial question because traditionally, the ceramic material that the bronze is poured into surrounds what would normally be the wax. And that melts out and burns out cleanly in an oven without emitting toxic fumes and that kind of thing. But most plastics do not burn out well and give off very toxic fumes and smoke that the foundries can’t deal with. So the first step was to ask: is there a material that will work in the 3D printer and burn out cleanly? It turns out that PLA, because it’s corn-based and not petroleum-based, burns out cleanly. That was a good candidate for replacing the wax. Then I realized that I needed to have a really big 3D printer to pull this off. And that just simply didn’t exist. So I found a guy down in Anza, California named Leonard Dodd, who was talking about making a very large 3D printer.
Leonard is a very interesting guy from the custom race car world and he said he could make a really big one. And I believed him. So we worked together – I mean, he built it, but to my specs. And so we came up with a printer that has a four-foot by six-foot by four-foot build envelope. Now I could make the enlarged hollow portions for bronze casting with greater ease and less cost than my other options.
Andrew: How do you think your process fit in with some of the other examples of Reality Computing shown at REAL 2015?
Bruce: I think one of the things that made Carl want to have me at REAL was that in a way I’m a metaphor for what they’re hoping it will do for bigger institutions, businesses, and industries, which is to really change what can be done. It’s not just about saving a little bit of money, it’s about being able to make something you couldn’t make otherwise.
Description: Bastian Schaefer is an Innovation Manager at Airbus Operations, project leader of the Bionic Partition project which focuses on multi-objective optimization approaches, and was a member of the team that created the Airbus Concept Plane & Cabin
Andrew: What were some of your top learning experiences at REAL 2015?
Bastian: For me, it was quite amazing to see how 3D printing, 3D capturing, and 3D modeling are driving very different kinds of industries at the moment. Not only are the aviation industry and companies like Airbus involved, but institutions like museums are using these new technologies to create something different and enhance the user experience.
Andrew: What would you say to those who are going to REAL 2016?
Bastian: Well, I wish I had gotten to see the talks on synthetic biology at REAL 2015, but unfortunately I missed them. I hope that we can learn more from scientists who are doing research in the field, and I would love to create a project based around synthetic biology. I believe it was Craig Venter who wrote, “If physics was the science of the 20th century, biology is the science of the 21st century.” In my opinion synthetic biology is the next step. We can learn more about DNA, how to decode it, how the information is embedded in it, and basically gain access to its source code. This will help us make new things with it, and can help us improve our aircraft in the future.
Andrew: How does 3D scanning play into your conception of design?
Bastian: We already have some projects underway where we capture the progress of our final assembly line for quality reasons. I guess this is a common application of 3D scanning, as it’s used in larger manufacturing facilities to enhance the quality of production. Additionally, some of my colleagues are working on a system for scanning airplanes before they take off and start their journey. Before a flight, the pilot reviews the condition of different items and parts of an airplane. We are working on a method where the pilot can use a drone, flying autonomously around the airplane, to check any part of the aircraft. In terms of our concept cabin, we talk more about scanning people. By scanning the eyes of the passengers, for example, you have access to their biometric information, which can basically double as their boarding passes.
Andrew: What are some common design challenges you face with 3D technology?
Bastian: With 3D scanning and 3D printing, a common obstacle is basically trying to figure out how you can turn the scanning data into code or an algorithm and make a real product out of it. How can you bring the scanning results into math? This doesn’t affect the scanning technology so much, but it affects the question of what to do with the data. Pattern recognition — which is also linked to the ongoing discussion about 3D scanning — will be a key enabler to understanding the biological world and creating new algorithms.
With 3D printing, we are looking for larger and faster printers, not only to be able to build bigger components, but to be able to reduce the cost of manufacturing with this technology. With respect to the cost of metal 3D printing in particular, there are concerns about being able to 3D print larger components at scale.
Andrew: Are you working on a large 3D printer or scaling up 3D printing at Airbus?
Bastian: We are engaged in two approaches. One is building a larger 3D printer, but this is kind of a fun project to understand how such a printer and its software work. On the other end, we are already collaborating with some of the larger 3D printing companies. We are writing specifications that reflect our requirements for a new generation of 3D printers and are hoping to get larger 3D printers in the near future.
Andrew: Describe the process of designing the Airbus Concept Cabin.
Bastian: To tell you the truth, the Airbus Concept Plane and the Airbus Concept Cabin’s bionic structure are very much driven from an industrial design point of view. We did our homework and created an optimized structure. We performed topology optimization by adding main load cases and airframe data, feeding it into 3D software, and letting the software come up with a result. We believe that this knowledge and approach will become part of the DNA at Airbus.
Andrew: How does generative design play into your work? Did you use evolutionary algorithms in the creation of the “bird-bone” like structure of the airbus concept plane?
Bastian: We do not work so much with the different kinds of permutations that you know from the natural world; we use mathematical formulations which reduce the overall weight by only applying material where it needs to be, and the result ends up looking similar to the structure of bird bones.
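The idea of applying material only where it needs to be can be illustrated with a toy density sketch. To be clear, this is not Airbus's actual workflow: a real topology optimization pipeline computes the stress field with finite-element analysis under the certified load cases, whereas here the stress field is simply faked so the keep-material-where-loaded step is visible.

```python
import numpy as np

# Toy "material only where it needs to be": start from a solid 2D block,
# fake a stress field (a real workflow runs FEA here), and keep only the
# most heavily loaded fraction of elements.
nx, ny = 40, 20
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))

# Assumed load paths: stress concentrated along two diagonal struts.
stress = np.exp(-30 * (y - x) ** 2) + 0.3 * np.exp(-30 * (y - (1 - x)) ** 2)

target_fraction = 0.4                   # keep 40% of the original mass
threshold = np.quantile(stress, 1 - target_fraction)
material = stress >= threshold          # boolean density field

saved = 1 - material.mean()
print(f"weight saved: {saved:.0%}")     # ~60%, with material left only on the load paths
```

The surviving elements trace the assumed load paths, which is why such results tend to look organic, like bird bones, rather than like conventional machined parts.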
Andrew: How do you approach materials research and simulation?
Bastian: At the moment we are focusing on certifying existing materials we use, such as titanium, steel, aluminum alloys and all different kinds of plastics. We know that the materials we get from those printers that can mix materials are not certifiable at the moment. In terms of brand new materials developed through computer simulations, we look to research institutes around the world such as the Fraunhofer Institute. We encourage them to come up with new alloys and new material compositions which we then investigate in our own research laboratories before we make a decision about them. Our facilities are primarily concerned with certifying existing materials and verifying specific properties.
Andrew: What kinds of metal 3D printed components are already incorporated onto Airbus planes?
Bastian: One of our test airplanes, the A350 MSN5, has a bionic bracket which is 3D printed from titanium. And this plane is flying today. When it comes to incorporating more 3D printed components into airplanes, it’s not so much a technical question as it is a matter of getting your suppliers to go the extra mile with you. We have a small team here at Airbus that really wants to roll out this technology, talk about it, create design rules and so on. Through these kinds of small projects, we’ve already seen that they have a large impact. Our engineers are full of ideas, and there is much more to come.
Andrew: In your opinion, how do you think synthetic biology will affect the design of a plane in the near future?
Bastian: In some areas it’s not too far away. Today, synthetic biology helps us create new types of biofuels. My interest is more in the structures of DNA, how structures are encoded in DNA, and how structures can change when you manipulate the DNA. This will push us into a completely new world where we can expect to discover new kinds of algorithms, which we can then use to execute new stress simulations and create better and lighter components.
A: What were your impressions of REAL 2015?
J: It was really interesting. There was an interesting mix of start-ups and multi-billion dollar companies, which was cool. I was a bit of a minority there, as I focus on high-tolerance aerospace. But I enjoyed what was obviously a collaborative effort.
A: What have you been up to since REAL 2015?
J: A lot (laughs). I’ve been getting into a lot of robotics, and a lot of large-scale manufacturing automation cells for aerospace. I can’t really say for whom, but I am working on getting permission to release more information.
A: Trying to measure Kapton was really fascinating to me, because it seems so difficult and is so crucial.
J: Kapton is a material that’s going on a spaceship, and it is a miserable son of a gun to measure, mainly because you can’t touch it. It’s infrared absorbent, which makes it difficult for laser radar systems to measure. Technically, only one flight layer has been measured so far, and the rest will be measured next year. It’s one of the hardest materials I’ve ever had to measure because its main purpose is to absorb the infrared spectrum from the sun. If sunlight bounces off of this material, it messes up the field of the optical sensors and the data will be screwed up. The laser radar system was more like a laser tracker. It takes what’s called a patch measurement, a surface vector intersection: it takes a little patch around the vector, fits a plane through it to make sure nothing is going in or out at a specific range, and you want to come in at around 25 microns.
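A patch measurement of this kind (fitting a plane through a small neighborhood of surface points and checking each point's deviation against a tolerance on the order of 25 microns) can be sketched as a least-squares plane fit. The patch size and noise level below are illustrative assumptions, not figures from the interview:

```python
import numpy as np

def patch_deviation(points):
    """Fit a plane to a small patch of 3D points (least squares via SVD)
    and return each point's signed distance to the plane, in input units."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return (points - centroid) @ normal

# Toy patch: a 5 mm x 5 mm grid of points with ~10 micron surface noise (units: mm).
rng = np.random.default_rng(0)
xy = np.stack(np.meshgrid(np.linspace(0, 5, 10), np.linspace(0, 5, 10)), -1).reshape(-1, 2)
z = 0.010 * rng.standard_normal(len(xy))
patch = np.column_stack([xy, z])

dev = patch_deviation(patch)
print(f"max deviation: {np.abs(dev).max() * 1000:.1f} microns")
```

Comparing the largest deviation against the tolerance is what flags a patch as in or out of spec; the hard part in practice, as described above, is getting clean optical returns off the material in the first place.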
A: What else have you been working on since REAL 2015?
J: I’ve sold a software system to a major space company, but I can’t tell you which one. It’s software for the quantification of all of their critical parts. I’m talking with the company now to get a release to do a presentation. My major project was getting this and the robotic system going for them. Many people do not realize how critical some parts are. You can’t even touch some parts internally. Take something that’s designed to handle a high amount of pressure and is made of aluminium. The acid on your hands will actually cause oxidation. Instead of having humans do it, I’ve created a pilot lights-out robotic system for handling and testing these highly sensitive and critical parts. When you’re pushing 13,000 lbs of fuel through a part per second, it has to be handled with extreme care.
A: You have a unique background in that you literally grew up in the aerospace industry.
J: I came from Janicki, which was a family-run business. My uncle Peter taught me very young to work with different components. From there, I got hired on at a company called Metris in Belgium, which is now owned by Nikon Metrology. There I worked on laser radar systems for special projects like the James Webb Telescope, worked on a fuselage mating system, and did some work on a 2 meter jet. After that I actually owned a company called Vesper Global, which was the sole quality assurance outfit for GE’s wind turbine blades at the time. I got tired of flying 300 days out of the year, pulled out of that, and went to Verisurf, where I ran a software development team for 5 years until I started Minds Mechanical in order to figure out how to actually quantify all the data that everybody takes and feed it back into the full product lifecycle management system. Calling a part good is fine, don’t get me wrong, but using that information to be able to sell higher-tolerance machines for more money made sense to me.
A: What is your hardware and software training like?
J: I think I’ve basically used every piece of metrology equipment in the world. At Verisurf we connected to over 130 custom devices and CMMs (Coordinate Measuring Machines). I’ve used pretty much every metrology software, and I actually do some consulting for investors who purchase companies in the space. I can’t name any right now, though I’m working on getting some clearance. But that always takes a while.
A: What are some of the challenges a metrology startup faces when maneuvering in a highly regulated industry such as aerospace?
J: Take my company for instance. One of my major competitors is Hexagon Metrology, which is a multi-billion dollar company with thousands of developers and tons of software. And all I know is that they are trying to do what I do. The issue they will run into is that, since they are so internally bound by regulations, they won’t be able to move fast enough to do things in the critical time period when a specific service is needed. Once a major corporation recognizes that something is needed, it can take years to accomplish the project. This is where a smaller company that works with everyone but has no ties to anyone can make a better product at the end of the day, because it works with everyone else’s products.
A: Are there any companies out there that fit the mold of your startup?
J: There’s a company out of Seattle called VRMesh. They do large-scale data mapping, kind of like what Autodesk and Faro do, with fantastic algorithms. They’re a really young company, they have fantastic math skills, and no ties to anybody. I actually looked at buying their library when I was at Verisurf, but they wanted a lot for it. One of the majors could buy them up and take leaps and bounds forward. Another one of the major points of my software is to get away from the abundance of proprietary software in the metrology world. You can’t bring a Polyworks, Verisurf, or SpatialAnalyzer file into anyone else’s software, unlike the CAD world where you can use an importer. This leads to silly problems: if Boeing Marietta and Boeing Fort Worth, which use different software (one uses Verisurf and one uses SpatialAnalyzer), want to send a file to each other, they literally can’t open it. I want the user to be able to upload the data into the system, upload the model and have people be able to view it on the cloud, which will save companies like Boeing and Lockheed a lot of money on expensive software seats.
A: Where do you see the convergence of 3D technologies affecting your work in the near future?
J: Large volume, speed and accuracy in Reality Capture. Currently, most metrology is done using either a hard-probe CMM or a laser tracker; basically, in aerospace it’s a probing system. There’s a lot of human interaction with those. Large-volume 3D Reality Capture will get more accurate. When the Faro laser tracker first came out, it was accurate to within a millimeter. The more data you can take at one time, or in a short period of time, the more you improve your accuracy. Currently, an inspection takes 5 or 6 hours, which can diminish accuracy due to thermal changes and other factors.
The faster you can take data, the more accurate it is. If it’s as accurate as one of the long inspections that exist now, then you will be able to capture the best data in the fastest time. I see this coming in the next ten years.
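The link between inspection time and accuracy comes down to simple arithmetic on thermal expansion: a part that sits on the shop floor for hours grows and shrinks with the ambient temperature. A back-of-the-envelope example (the alloy, part size, and temperature drift here are assumed for illustration, not figures from the interview):

```python
# Thermal growth of an aluminum part during a long inspection (toy numbers).
CTE_ALUMINUM = 23e-6   # linear thermal expansion, 1/degC (typical for aluminum alloys)
part_length_m = 2.0    # assumed part size
temp_drift_c = 1.0     # assumed shop-floor temperature drift over a 5-6 hour inspection

growth_m = CTE_ALUMINUM * part_length_m * temp_drift_c
print(f"dimensional change: {growth_m * 1e6:.0f} microns")  # 46 microns
```

A drift of 46 microns dwarfs a 25 micron patch tolerance, which is why capturing more data in less time can directly translate into better accuracy.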
Description: In 2006, architect David Benjamin founded The Living, a firm which employs a number of unique environmental and computational approaches to creating buildings and structures.
Andrew: What were your impressions of REAL 2015?
David: I was there for most of REAL and I found it to be a really interesting and exciting event, not only because of the different panels and presentations but also because of the really high-powered audience. I spent a lot of time just talking to people outside of the actual presentations, with a couple of people I knew and some people I had just met there or had been introduced to. I remember thinking that compared to some other events, it had such an interesting audience and community, where almost everyone there was doing something interesting related to the general topic. It wasn’t like there was one group of people who seemed like they were there just to present; it seemed to me that everyone either was a presenter or could have been a presenter.
Andrew: Are you interested in going to REAL 2016?
David: Oh absolutely.
Andrew: What have you been up to since REAL 2015? I know that is a general question, but are there one or two things that you would want to share with the REAL audience?
David: One that I can talk about is another project that may not be exactly within the realm of Reality Computing or the thing we always think of, but I will just describe it to you and you can decide what fits and what doesn’t fit. We are designing a laboratory building for Princeton University, a small, kind of industrial, flexible building, but one really interesting thing about it is that it is going to be the home of the Princeton School of Architecture’s experiments with what they are calling embodied computations. It is actually going to be called the Laboratory for Embodied Computations and it will house experiments on robotics, sensors embedded in buildings, and all sorts of full-scale prototypes of new architectural technologies. I think the interesting place that the project intersects with a broad definition of Reality Computing is through the Internet of Things. As more and more sensors get embedded in buildings, control systems will have a lot of different data coming together from local building sensors as well as remote and global information centers. I think there will be some really interesting ways to define what buildings can do and how buildings can perform. This building we’re designing will be a kind of center for setting up that kind of thing and mocking it up and testing it out. So it has been fun for us to be designing a building that is really meant to be a research instrument where the people inside can perform all kinds of experiments on new building technology. One way we’ve described it is that we are trying to build an “open-source” building, a building that has a baseline level of functionality and performance but that is designed to be written to or updated over time, designed to be swappable and modular. An example of that is the heating and cooling system. The HVAC will be functional on day one, but it is intentionally designed to have multiple ways to operate.
It is also designed to have plug-and-play heating and cooling units. A lot of study through sensors — which is a version of Reality Capture — will let us know which systems perform in which way at a pretty high resolution. It won’t be like taking one temperature reading or one humidity reading or one presence-of-people reading per space. It will be taking multiple readings at a higher resolution. Another way that it is modular, flexible and open-ended is that about two-thirds of the building will be enclosed like a typical building, but one-third will be an open frame that is designed to host full-scale prototypes, façade elements and roof elements. This will allow for what we’ve described as a “quick-release-panel building envelope”. This means that you can plug in a new prototype through a sort of bicycle quick-release system, observe how it performs and then pull it out and swap in a new one. I think it could be argued that this is the perfect time for that kind of experiment in building: in the past, sensors weren’t as cheap, and we didn’t have easy access to powerful computing and information about other buildings. So now is the perfect time for us to actually study a building in this way and perform real-time experiments on it.
Andrew: How important is collaboration to your concept of Reality Computing because it seems to me that it is becoming more obviously integral to real progress and true innovation. As all of these different 3D technologies converge it seems as though it’s almost impossible to push things forward without collaboration.
David: Yeah, I totally agree. To me it’s closely tied to our current issues and problems: climate change, running out of raw materials, global energy. For those kinds of problems, and also for certain new areas of technology, collaboration is essential. It’s almost the only way to do things that are relevant and interesting. Another way to think about it would be that these problems and issues within the domain of Reality Computing are so complex and so multi-faceted that they really demand interdisciplinary teams and multiple perspectives. It is almost not possible for any single person in any single discipline to have a complete handle on everything involved, so I think it is natural for there to be teams collaborating across multiple disciplines. It can also be said that these new, exciting domains within Reality Computing can’t really be addressed by any of our old definitions of existing disciplines. Reality Computing is certainly relevant for architecture, but it can’t be completely addressed by our old definition of architecture, and likewise it can’t be completely addressed just by computer science, etc. So I think it is an exciting time in that way, and it has been interesting and fun for me because the work of my studio has always involved collaboration — we have really always loved working in that way.
Andrew: Can you talk about your ongoing collaboration with Airbus?
David: It is certainly safe to say that we are collaborating with Airbus, thinking about new airplane components and new workflows for designing them, and understanding that this involves generative design, optimization, 3D printing, and new materials. It also involves a lot of Reality Capture and some very intense computing and data analysis. Based on that information we are trying to create not only innovative new parts but innovative new ways to produce parts. We are now just talking about when and how to publicize the project. When we do describe it, we will show everything, all of the aspects related to Reality Computing.
Description: Hao Li is an Assistant Professor of Computer Science at the University of Southern California and the CEO of Grit Enterprises Inc.
Andrew: How does your work intersect with Reality Computing?
Hao: I actually began collaborating with Autodesk’s Reality Capture Lab in San Francisco where we had a lot of discussions and I got to use their equipment to process some reality capture data. Before USC, I worked at Industrial Light & Magic for a year, which was actually a 10 minute walk away from Autodesk’s Reality Capture Lab. They had this multi-view camera system in 2011, which was exactly the thing I needed to do all of this hair reconstruction work.
Andrew: You’re going to be presenting at REAL 2016. What are you planning for your presentation?
Hao: Yes, I’m excited for REAL 2016 because it seems like a unique conference where physical and virtual worlds come together in a necessary way. For starters, I’ll be providing an overview of what people are doing in the computer graphics research community. I will certainly talk about creating virtual humans in a deployable environment. I’ll talk about possible ways to create a full human body with clothing and high-quality hair from a rough 3D scan. My goal is to show that it will be possible to bring high-quality digital human creation to the mainstream. But things are changing very fast, so I might be talking about something that I haven’t produced yet.
The other thing I’ll talk about is different interfaces to an immersive world. It can be VR, AR, graphic displays and so on. We don’t want to just create something similar to video streaming. It should be something more intimate — exchanging eye contact and really subtle facial expressions as well as gestures and so on. I want to talk about my work and research but also show what is actually possible, what challenges there are, and what misconceptions people have about Virtual Reality and capturing in 3D. I think it’s interesting to expose some of the challenges and hopefully provoke some interesting discussions.
Andrew: What was your work like at ILM?
Hao: It was interesting. I was only there for a year, coming from postdoc programs at Columbia and Princeton. I was just about to start as a faculty member at USC, but I decided to first spend a year at ILM. Everybody at ILM was working on technologies for movies or VFX, but I joined a new group that was developing a lot of technologies for real-time virtual production — including techniques for capturing human performances and wrapping them in real time onto characters. The overall idea was to generate 3D content really fast, and I’m sure they’ve been doing some interesting stuff since I left. Overall, the idea was really about replacing old-school film production pipelines with ones from gaming. Of course, with VR and AR there are a lot more opportunities for the growing number of people who are consuming 3D content. But while I was working at ILM, I would spend part of my time working on hair reconstruction with some of the hardware at Autodesk’s Reality Capture Lab.
The idea is to explore answers to the question, “Is it possible to empower consumers to perform reality capture?” I mean, anyone can create high-quality video content with their GoPros or iPhones — the cameras on these devices are just incredible now. So we are trying to figure out what type of systems and algorithms will allow 3D scanning to become as ubiquitous as amateur videography.
Andrew: What are some of the obstacles you see in developing systems that would empower consumers to produce 3D content the way they currently produce videos?
Hao: The whole idea of having 3D as a central representation for both producers and consumers represents an interesting shift. Producing great 3D content is still very difficult on its own, and there is a lot of work to do in order to make producing 3D content viable and simple for consumers.
Changing the whole computer graphics pipeline in order to facilitate an increasingly large amount of user-generated, high quality 3D content requires asking a lot of questions: How do you recognize 3D objects? How do you process them? How do you manipulate them? How do you localize yourself within a physical 3D dimensional world?
For example, if you want to use an augmented reality application where you can enhance certain physical content with digital content, you need to have an understanding and a three-dimensional representation of the space around you. How do you reconstruct that three-dimensional world? I think one of the most promising things is the sensors that you have in mobile devices. If they are equipped with 3D scanning devices, it’s going to affect object recognition and data collection. It will affect how you are going to interact with whatever your device is seeing. 2D information is limited, and an object can appear damaged. If you have geometric information like a point cloud or 3D mesh of something, then the appearance information becomes a lot more accurate. You also have algorithms that perform a lot faster if you have geometric information. Overall, it’s a big challenge that raises a lot of interesting questions and avenues for experimentation.
Andrew: So current 3D scanners are not powerful enough to generate 3D content that would be suitable for the applications you have in mind?
Hao: Well, I can scan an object with Kinect Fusion or a RealSense scanner, but I just get the surface. What you really want is a functional object. You need a high-quality 3D model that can be animated, manipulated and enhanced by powerful software. This is where the research is moving. If you look at the SIGGRAPH conference, there are a lot of new research projects from universities and companies. And a lot of the focus is on how to create semantically associated geometry.
For example, if you scan yourself, you want to be able to animate the result. Instead of hiring an animator to produce it, you want a system that is sufficiently intelligent that you can do it yourself.
Another inherent problem of 3D capture is that the acquisition process of any object is difficult because it leaves you with an incomplete object. That doesn’t mean capturing a complete object is impossible, though. A lot of current research involves figuring out ways to complete the object, to fill in holes and missing portions. If you see something, you can sort of tell what is behind it. We are researching techniques for creating and adding accurate yet unseen portions of a 3D model using databases, probabilistic models and so on.
Andrew: Where does the 3D data in these databases come from and how do you reuse it?
Hao: Many structures are difficult to scan. Take pipes for example. They’re metallic, they might reflect the laser or whatever active sensor gear you’re using, but you can tell that it’s a pipe, so you try to use prior knowledge to fill in the missing portions. There is a lot of 3D data. You have image databases, 3D shape databases, motion capture databases. Researchers are using them to push the technology, but aren’t necessarily concerned with commercial applications at this point. It takes time for these things to make their way into industry. In controlled tests, it may work well, but when you release it to the world, it will become something different and the issues that may arise are sometimes hard to predict.
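One simple version of that prior-knowledge idea can be sketched as template retrieval: compare the partial scan against a small shape database using a one-sided Chamfer distance (one-sided so that the regions the scan missed are not penalized), then borrow the best-matching template's points to fill the holes. This is a hypothetical minimal sketch, not any specific system described in the interview:

```python
import numpy as np

def one_sided_chamfer(partial, template):
    """Mean distance from each scanned point to its nearest template point.
    One-sided, so regions missing from the scan are not penalized."""
    d = np.linalg.norm(partial[:, None, :] - template[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def complete_scan(partial, database):
    """Retrieve the best-matching template and return scan + template
    points as a crude hole-filled result."""
    best = min(database, key=lambda t: one_sided_chamfer(partial, t))
    return np.vstack([partial, best])

# Tiny database: a circle of points (a pipe cross-section) and a square outline.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
square = np.array([[u, v] for u in np.linspace(-1, 1, 16) for v in (-1.0, 1.0)])

# Partial scan: only half the circle was captured (the visible side), slightly noisy.
partial = circle[:32] + 0.01

completed = complete_scan(partial, [circle, square])
print(len(completed))  # 96: 32 scanned points plus the 64-point retrieved template
```

A real pipeline would align the template to the scan and blend geometry rather than just concatenating points, but the retrieval step is the essence of using a database to fill in what the sensor could not see.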
Andrew Hessel is a Distinguished Research Scientist with Autodesk’s Bio/Nano Research Group, which is developing software tools for designing living and nanoscale systems. He’s currently working to “3D print” customized cancer fighting viruses, starting with dog cancers. He is a 2016 AAAS-Lemelson Invention Ambassador, a fellow of the University of Ottawa Institute for Science, Society, and Policy, and past co-chair of the biotechnology and bioinformatics at Singularity University. He has written articles on genetic technology for The Atlantic, Wired UK, and Huffington Post and given over 100 invited talks on next-generation biotechnology.
Ben Davis is one of the lucky ones who found a way to blend personal passion and profession. Determined to work on projects that had “meaning,” he cut his professional teeth as the manager of public information on the $4b Boston Harbor Cleanup Project. In 1994, he moved to San Francisco and started an award-winning creative communications firm that specialized in working on mega-projects that leave a long-lasting positive legacy. He has named, branded, and led communications on infrastructure projects totaling more than $15b in civic investments. He is the visionary behind The Bay Lights project and is now the founder and CVO of Illuminate the Arts.
David Benjamin is Founding Principal at architecture firm The Living and Assistant Professor at Columbia University Graduate School of Architecture, Planning, and Preservation. The Living explores the architecture of the future by building full-scale, functioning prototypes today. The firm’s projects bring new technologies to life in the built environment, integrating design innovation, sustainability, and the public realm. The Living holds that cities and buildings are living, breathing organisms, and that in the context of rapid change and new urban challenges, design should itself be a living, breathing ecosystem. Within this design ecosystem, the firm works on multiple scales simultaneously, anticipates and welcomes rapid change, and embraces design with uncertainty, design with rules rather than fixed forms, and design with shifting and unknowable forces. Clients include the City of New York, Seoul Municipal Government, Nike, Prada, 3M, Airbus, Quantified Self, Miami Science Museum, Eyebeam Center for Art and Technology, and Björk. Recent projects include the Princeton Architecture Laboratory (a new building for research on next-generation design and construction technologies), Pier 35 EcoPark (a 200-foot floating pier in the East River that changes color according to water quality), and Hy-Fi (a branching tower for the Museum of Modern Art and MoMA PS1 made of a new type of biodegradable brick). The Living was recently acquired by Autodesk.
Eythor Bender is an innovation and business leader in robotics, bionics and prosthetics. He is the CEO of UNYQ, a company using 3D printing to bring fashion and choice to amputees. Previously, he was the CEO of Ekso Bionics, where he unveiled the prototype for the wearable robot Ekso, which enables wheelchair users to stand and walk. He took the company from an R&D lab to a nimble commercial enterprise that recently went public. Before that, Eythor served as president of Ossur, the Icelandic designer of non-invasive lower limb technologies, and grew the company over 13 years from a startup to a global, publicly traded organization with over 1,600 employees.
Lieutenant Colonel Patrick C. Suermann, PhD, PE, LEED AP is a graduate of the U.S. Air Force Academy with a B.S. in Civil Engineering. After serving as a combat and stateside engineer, he earned his M.S. in Construction Management from Texas A&M University and subsequently taught computer courses for engineers in the Department of Civil and Environmental Engineering at the U.S. Air Force Academy. After successfully defending his dissertation and receiving his Ph.D. in Design, Construction, and Planning at the University of Florida as the first ever Rinker Scholar, Suermann deployed to Afghanistan to oversee nearly $1B in Air Force contingency construction. Later, he was the BIM SME and MILCON PM/Transformation Action Officer at the Air Force Center for Engineering and the Environment (now AFCEC) in San Antonio, TX. After a year as Support Squadron Commander at Thule Air Base, Greenland, Lt Col Suermann returned to teaching at the Air Force Academy, where he was appointed Construction Division Chair in 2012 and promoted to Associate Professor in January 2015.
Atul Khanzode leads DPR Construction’s Consulting and Construction Technologies groups. In this role, Atul leads many strategic technology initiatives related to virtual design and construction (VDC), operations and preconstruction, and works on some of the key consulting engagements for DPR. Atul also works with DPR project teams across the world to implement VDC methods and lean construction processes.
Atul has worked on some of the most complex healthcare, biotech, and advanced technology projects for DPR Construction in the last 17 years. Atul has shared his extensive expertise in VDC and Lean methods with many organizations, including the Lean Construction Institute Design Forum, Design-Build Institute of America, Center for Integrated Facility Engineering at Stanford University, AIA California, Open BIM Forum in Sweden, San Jose State University Professional Development program, University of California Berkeley Professional Education Program, Autodesk University, Bentley Connect and other forums focused on exploring issues related to integration, lean construction and VDC around the world.
Atul earned his doctorate in construction engineering and management, focusing on integrated practice, VDC, and lean construction, at Stanford University’s Center for Integrated Facilities Engineering (CIFE). Atul also holds a master’s degree in civil and environmental engineering from Duke University and a post-master’s Degree of Engineer in construction engineering and management from Stanford University.
Mike Whaley has over thirty years of experience in the architecture and construction industry. Today he is President of TURIS Systems, LLC, a technology consulting company for contractors, design professionals and owners. His background in every aspect of the A/E/C industry gives him a uniquely qualified perspective on the future of these industries. Over the last eight years, his visionary guidance and direction have brought his team to the forefront of technology. His focus now is on the implementation of Reality Capture and BIM and, as always, on what comes next. In the last 18 months, he has spoken on a variety of topics about the future of technology implementation, from New Zealand and Australia to North America and Ireland. In addition, he has published multiple articles on these topics and been a contributing author to two books on BIM.
Tim Zaman (1988) received his BSc degree in mechanical engineering and an MSc degree in biorobotics from the Delft University of Technology, where he specialized in computer vision. For his thesis he developed a 3D scanner for paintings that simultaneously captures color and topography. The scanner was recently used to replicate Rembrandt’s two-square-metre Jewish Bride, yielding a 3D point cloud containing over a billion points. He is currently a part-time PhD student at the Delft University of Technology, developing novel imaging systems for cultural heritage.
Greg Downing specializes in image-based 3D technologies and computational photography techniques for immersive environments. He has developed and executed photographic/CGI workflows for a number of fulldome, VR, and feature film productions while President at xRez Studio. His work can be seen in museums such as Ars Electronica, the American Museum of Natural History and the Computer History Museum. His IMAX and fulldome work has won or been nominated for awards at the Jackson Hole Film Festival, from the Visual Effects Society, and at the 4K film festival. He has worked on notable visual effects films such as “I Am Legend”, “Spider-Man 3” and “Narnia”. Recent work includes an immersive installation for Björk and a VR/fulldome film with Ai Weiwei.
Roman Hasenbeck is the Managing Director at Metaio, Inc. in San Francisco, California. He joined Metaio in 2009 and focused his talents on the European market while based in Metaio’s headquarters in Germany. Roman now oversees the San Francisco-based U.S. business operations of the world’s leading Augmented Reality company. He is an Augmented Reality evangelist and frequently speaks at conferences throughout the US. Prior to working at Metaio, he held positions at KPMG Advisory and also worked at Egon Zehnder International. Roman holds a Master of Science in Management and Economics from Universitat Pompeu Fabra in Barcelona.
Laser Scanning (Time-Of-Flight, Phase Shift, Triangulation), 3D White Light, LIDAR, SONAR, Aerial Survey, UAV Survey, Geophysics, Marine Survey, Close-Range Photogrammetry
Mesh analysis and optimization, computational design, interference detection, data management, artificial intelligence, recognition/machine vision and reverse engineering
Virtual Reality, Augmented Reality, additive manufacturing, robotic digital fabrication, CNC machining, co-bots and robots
Experience world-class talks, ask questions, get your hands on a cross-section of 3D technology that will leave you with a clear picture of the future of making, the future of computing and the future of capturing new reality data.
This is no ordinary tradeshow or maker faire. This is a hands-on think tank where you are free to discuss new ideas and projects with those who are pioneering incredible uses of 3D technology and sharing their visions of what lies ahead. Ask the speakers and exhibitors questions. Engage with a very savvy crowd. You’ll understand what’s happening here, but you won’t quite know what to call it. We call it Reality Computing.
Includes REAL Talk (sessions, plenaries, breaks, lunches, and reception) plus REAL Live (demo hall)
Special rate for REAL2016 attendees at the Argonaut Hotel, Fisherman’s Wharf - 415-563-0800. Ask for “REAL2016” rate. - SOLD OUT
Student/Faculty/Govt/Non-profit Discount available on registration page