Robot-made Adidas shoes, coming soon!

Your next pair of Adidas runners might be made in America – by a robot. Yep, you read that right. The German sportswear giant revealed details about its Speedfactory, coming to the U.S. in 2017. The factory will be located in Atlanta, Georgia, and feature 74,000 square feet of robot shoe-making capability. It is expected to be fully operational by the end of next year.

“This allows us to make product for the consumer, with the consumer, where the consumer lives in real time, unleashing unparalleled creativity and endless opportunities for customization.”

– Eric Liedtke, Adidas Group Executive Board Member

The factory has an output capacity of 50,000 pairs of shoes per year, which is not a very big number, but is a fairly good start. The Atlanta facility will be Adidas’ second Speedfactory, joining the original in its home territory of Germany. What does this mean for human workers? The company says that it will create about 160 jobs for people overseeing the factory in Atlanta.

Adidas has stated that the demand for customization is a key factor behind implementing the idea of Speedfactory. The first-of-its-kind business model advances Adidas’ consumer-centric approach to product creation, allowing the brand to decentralize production and react faster to consumer needs.

The Speedfactories have the ability to reconfigure production lines based on changes in customer demand. Programming the robots to focus more on one particular model, a different size or a different combination of core components is much easier and requires less transition time than with a human workforce. A robot workforce could give Adidas the flexibility to offer variety while maintaining high-volume output.
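
To make that flexibility concrete, here’s a minimal sketch of demand-driven reallocation. Everything in it – the model names, the demand figures, the proportional split of robot line-hours – is invented for illustration; Adidas hasn’t published how Speedfactory scheduling actually works:

```python
# Hypothetical sketch: reallocating robot line-hours in proportion to demand.
# Model names and numbers are invented for illustration.

def plan_line_hours(weekly_demand: dict, total_line_hours: float) -> dict:
    """Split the week's robot line-hours across shoe models, proportional to demand."""
    total_pairs = sum(weekly_demand.values())
    return {
        model: total_line_hours * pairs / total_pairs
        for model, pairs in weekly_demand.items()
    }

# Demand shifts week to week; the "reconfiguration" is just a new plan,
# not retraining a human workforce.
print(plan_line_hours({"UltraBoost": 600, "NMD_R1": 300, "Custom": 100}, 120.0))
print(plan_line_hours({"UltraBoost": 200, "NMD_R1": 500, "Custom": 300}, 120.0))
```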


Anisha Sawant


Our buildings are talking – it’s about time we listened.

You know how people often say “If only these walls could talk...”? Well, their prayers just got answered, sort of. We’re talking about looking at the buildings around you as living, breathing organisms. Walls, ceilings, doorknobs that can observe, and learn from, everything that’s happening around them. IBM’s Watson has been doing all kinds of awesome stuff around the Internet of Things (IoT), and its latest development envisions buildings that can think for themselves.

Watson IoT for buildings

Watson IoT for buildings is a complete solution for smart buildings that allows you to capture data from sensors and devices and consolidate that information on a single cloud platform. Using cognitive computing, Watson generates recommendations for you that can be operationalized by your asset and facilities management software.
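
To give a flavor of the “capture data from sensors and devices” half of that pipeline, here’s a minimal sketch of a sensor publishing telemetry over MQTT, the lightweight messaging protocol most IoT platforms speak. The broker address, topic and payload schema below are placeholders, not IBM’s actual endpoints:

```python
# Sketch: a room sensor publishing telemetry to a cloud IoT platform.
# Broker host, topic and payload schema are placeholders for illustration.
import json
import random
import time

import paho.mqtt.client as mqtt  # a generic MQTT client library (paho-mqtt 1.x style)

client = mqtt.Client(client_id="conference-room-sensor-42")
client.connect("iot-platform.example.com", 1883)  # placeholder broker

while True:
    reading = {
        "room": "4F-conference-A",
        "temperature_c": round(random.uniform(20.0, 24.0), 1),
        "occupancy": random.randint(0, 12),
        "timestamp": time.time(),
    }
    # The platform consolidates events like this one and hands them
    # to the analytics layer that generates recommendations.
    client.publish("building/telemetry", json.dumps(reading))
    time.sleep(60)
```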

What is a cognitive building?

The people and assets in your building create an enormous amount of unstructured data. The challenge is using that information to make informed decisions about how to optimize the experience of building occupants, staff and management. Watson IoT can help you tap new sources of data and apply cognitive capabilities so you can flexibly adapt your space to changing needs.

What can IoT do for your building?

Your building would essentially have a brain of its own. Instead of nerves, there are myriad sensors deployed in walls, lights, heating and cooling equipment, even the faucet in a sink. What creates the structure’s consciousness is the software that ties it all together. But these brains, like any brain really, don’t just blink on and immediately understand how to function in the world. Like any growing person, the “brains” in our roads, office buildings, homes and factories will need to be taught how to think, how to problem-solve, how to arrive at smart decisions.

Digital Twinning

Some buildings today are built from the ground up with nearly one IoT-enabled sensor per square foot—monitoring temperature, humidity, how many people are in a room, etc. Managers can then “see” the building on a computer, in what’s known as a “digital twin.” This digital twin will then build a model of the world it inhabits. With a full picture of what’s going on inside their properties, owners don’t rely on guesswork and history to make decisions, but real, actionable data.
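
At its simplest, a digital twin is just a live, queryable model of the building that mirrors incoming sensor readings. A minimal sketch, with invented room names and fields:

```python
# Sketch: a toy "digital twin" that mirrors the latest reading from each sensor.
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    area_sqft: float
    latest: dict = field(default_factory=dict)  # sensor_id -> last reading

@dataclass
class BuildingTwin:
    rooms: dict = field(default_factory=dict)  # room name -> Room

    def update(self, room: str, sensor_id: str, value) -> None:
        self.rooms.setdefault(room, Room(room, 0.0)).latest[sensor_id] = value

    def snapshot(self, room: str) -> dict:
        """What the manager 'sees' on screen: the room's current state."""
        return dict(self.rooms[room].latest)

twin = BuildingTwin()
twin.update("4F-conference-A", "temp", 22.5)
twin.update("4F-conference-A", "occupancy", 3)
print(twin.snapshot("4F-conference-A"))  # {'temp': 22.5, 'occupancy': 3}
```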

“Buildings will know the context of how space is being used and predict the kind of load that will be required—is it a Friday when many people work from home, does tomorrow’s weather mean there will be extra heating or cooling required?” says IBM Research’s Dr. Joern Ploennigs. With all the data collected, sensors could alert the building manager that something is going to break before it actually does – imagine the wonderful convenience!

  • A business might have a conference room with an occupancy rate of 95% during working hours. With usage that high, it’s very likely that people are being turned away when they need a room to meet. The conference room may hold 12 people, but the sensors say the average meeting has just three. Cognitive analytics will suggest installing a wall so that the room can serve more employees more efficiently (a toy version of this logic is sketched after this list).
  • Deloitte’s Edge building in Holland analyzes an employee’s calendar and reserves the needed desks accordingly. Someone may start with a collaboration space in the morning to work through the latest client presentation; then have the board room reserved for that big lunch all-hands; and then shift to a quiet reading space to catch up on the new report for a second client. With so much of our work documentation stored digitally, Eva Fors, CEO of sensor maker Yanzi, says the idea of an assigned, personal desk will soon be a thing of the past: “You take a seat based on what you’re actually going to do.”
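
Here’s that toy version of the conference-room logic from the first bullet. IBM’s cognitive analytics are obviously far more sophisticated, but the arithmetic behind the recommendation is simple:

```python
# Toy version of the conference-room recommendation: a room that is almost
# always booked but mostly by tiny meetings is a candidate for splitting.

def room_recommendation(occupancy_rate: float, capacity: int, avg_meeting_size: float) -> str:
    if occupancy_rate > 0.9 and avg_meeting_size < capacity / 3:
        return "Room is nearly always booked by small groups: consider splitting it."
    if occupancy_rate < 0.2:
        return "Room is rarely used: consider repurposing the space."
    return "No change recommended."

# The numbers from the example above: 95% booked, 12 seats, ~3 attendees.
print(room_recommendation(0.95, 12, 3.0))
```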

Watch this video to learn how IoT will transform the way we live, work and connect.

Find out more about IBM’s plans for the future of buildings here.


Anisha Sawant


ARA you ready for the future of smartphones?

For a while now, the world has waited and wondered when Google would take the plunge and build its own Android phone for consumers, and directly take on the iPhone — there have been hints and leaks, but nothing concrete.

On Friday, at its I/O conference, Google announced that it’s moving the ambitious Project Ara modular smartphone team out of the ATAP research lab and into its own proper unit within Google, under new hardware chief (and former Motorola president) Rick Osterloh. Developer kits will be shipped this fall. And a consumer Ara phone is coming in 2017 as well (yaay!), marking the first time Google has ever built its own phone hardware — Nexus phones have been built by partners like Huawei, LG, and HTC.

What is Project Ara?

Smartphones have gotten lighter, faster, cooler, sleeker, smarter. But their average lifetime is still about 2 years, give or take. We’re forced to trade them in every time a single, seemingly important part gives out. Or when there’s a phone with fancier features out on the market and we don’t wanna be left behind.

But imagine if we could replace or enhance only the modules we wanted. The kind of freedom this would give the consumer is nothing short of revolutionary. And Google, being the visionary we know and love, wants to give us exactly that. They want to revolutionize the smartphone industry by building a fully customizable, modular smartphone of the future.

Google showed off a working prototype version of Ara, which lets you live-swap hardware modules like cameras and speakers onto a base frame that contains the core phone components. To add to the awesomeness, you could even say something like, “Okay, Google, eject the camera” to release modules. It has six modular slots — each one is generic, so you can put any module in any slot, and they’re all linked up through a new open standard called UniPro that can push 11.9 gigabits of data per second in both directions. (For more on this, you could check out this piece from WIRED.)
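
To picture what “any module in any slot” means in software terms, here’s a purely illustrative Python sketch – Google hasn’t published Ara’s firmware, so the class names and behavior below are invented:

```python
# Illustrative only: a frame with six generic slots, where any module type
# can occupy any slot and be ejected on command.

class Module:
    def __init__(self, kind: str):
        self.kind = kind  # e.g. "camera", "speaker", "battery"

class Frame:
    SLOTS = 6

    def __init__(self):
        self.slots = [None] * self.SLOTS

    def insert(self, slot: int, module: Module) -> None:
        if self.slots[slot] is not None:
            raise ValueError(f"slot {slot} occupied")
        self.slots[slot] = module  # generic: any module fits any slot

    def eject(self, kind: str):
        """'Okay, Google, eject the camera' boils down to something like this."""
        for i, m in enumerate(self.slots):
            if m is not None and m.kind == kind:
                self.slots[i] = None
                return m
        return None

frame = Frame()
frame.insert(0, Module("camera"))
frame.insert(3, Module("speaker"))
print(frame.eject("camera").kind)  # camera
```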

This video is just a taste of something truly groundbreaking that’s on the horizon. And after all this time, it seems like Project Ara is actually coming together. And it could very well be the last phone you ever have to buy. Are you excited yet?


Anisha Sawant


Is AI getting ready to outperform humans? Zuckerberg sure thinks so!

Technology is advancing at a tremendous rate. Everything seems to be getting smarter – homes, cars, shoes... you name it! A lot of attention is being focused on AI – with tech giants now investing heavily in incorporating bots into their services. But are we really looking at a not-so-distant future where AI will surpass humans at doing the things that, well, make us human?

Recently, Facebook founder and CEO Mark Zuckerberg was asked about the future of the machine learning technology that powers Facebook’s Messenger bots. Here’s what he said...

“So the biggest thing that we’re focused on with artificial intelligence is building computer services that have better perception than people… So the basic human senses like seeing, hearing, language, core things that we do. I think it’s possible to get to the point in the next five to 10 years where we have computer systems that are better than people at each of those things.”

Source: The Verge


Bit scary, isn’t it? The thought of computer systems mastering the very abilities that make us biologically human – the senses and skills we rely on to interact with our world. It makes it seem rather inevitable that a wide variety of jobs currently done by people will no longer require expensive human labor, because artificial intelligence will be far superior.

However, Zuckerberg goes on to say...

“That doesn’t mean that the computers will be thinking or be generally better…”

That isn’t really a great amount of praise for the species he belongs to, but we’ll take whatever little we can get. “Thinking” here means something more complex than understanding how to carry on a conversation or tell a cat from a dog. It’s about being able to do several different things well, and learning on your own in an unpredictable and unsupervised environment. “Generally better” is being used in the sense that we are generalists, capable of handling a wide universe of tasks.

Source: The Verge

Human and animal learning is mostly unsupervised. We know how to program machines to do certain tasks, to follow a routine, to understand data and extract meaningful information from it. But we cannot train a machine to master unknown skills on its own. Until unsupervised learning can be “taught” to machines, humans, as natural learners, will still have the upper hand. There cannot be true AI without unsupervised learning.
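
For a concrete sense of what today’s “unsupervised” learning can do, here’s a small example using scikit-learn: the algorithm gets unlabeled points and finds the groups by itself – useful, but still a long way from a child learning about the world unsupervised:

```python
# Unsupervised learning in the narrow, current sense: the algorithm gets
# no labels, only raw points, and finds structure (clusters) by itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabeled blobs of points; nobody tells the model there are two groups.
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels[:5], labels[-5:])  # the model has separated the two blobs
```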

Zuckerberg sees bots stepping in for customer service reps and personal assistants.

“One way that I think you’re going to see bots work, between people who are actually driving the businesses directly will need to in some way train or answer questions for people, but we can build artificial intelligence that can learn from people how to automate a lot of that and make that experience a lot faster for people who want to interact with businesses and public figures.”

How should we prepare for this new reality? One possibility is universal basic income (more on this another time). The other is to redesign our educational system with a focus on new skills that will equip our future generations to find meaningful jobs in a world of advanced AI.

But when you look at the less-than-impressive launch of recent smart bots from Microsoft and Facebook — they turned out to be slow (Messenger bots that don’t respond for several minutes), racist (Tay’s mean tweets) and in general, not the smartest — you can’t help but wonder if this is just another one of those AI hype rounds that will soon lose traction. What are your thoughts on this? Will machines ever really be better than humans, at being human? Leave us a comment below!



Anisha Sawant


Take a Magic Leap into the Future: Beyond Virtual Reality

Humans have always been rather keen on exploring a reality beyond their own. Virtual Reality has been so intricately imagined for so long that its availability to the common man honestly seems a bit overdue. And companies are now working furiously to bring out gizmos that will make our inner sci-fi geek jump up with joy. Yaay!

Tech giants have invested heavily in their VR departments, with headsets and developer kits already out on the market – Google Cardboard, Facebook’s Oculus Rift, Microsoft HoloLens, HTC Vive – to name a few. But there’s a tech startup that’s been working on something that transcends anything you’ve ever seen – be it real or virtual!

Magic Leap is a Florida-based startup that likes to work (mostly in secret) on blurring the line between what’s real and what’s not. Google was one of the first to invest in it, and several others followed suit. To date, investors have funneled a whopping $1.4 billion into the company.

Optical systems engineer Eric Browy looks through a photonics verification test rig in Magic Leap’s optics lab.
Source: WIRED

That astounding sum is especially noteworthy because Magic Leap has not released a beta version of its product, not even to developers. Aside from potential investors and advisers, few people have been allowed to see the gear in action.

So what exactly do they use all that money for? Brace yourselves and watch the video, this is guaranteed to blow your mind!

What did we tell you? Jaw-droppingly cool, isn’t it? The folks over at WIRED did an awesome exclusive with founder Rony Abovitz and his team at this futuristic company, read more about that here. Leave us a comment and let us know what you think of Magic Leap’s incredible new technology.


Anisha Sawant


Check out Facebook’s latest gadget — it lets you shoot 3D video!

Yesterday, at F8 – Facebook’s annual global developer conference – Founder & CEO Mark Zuckerberg announced that the company has successfully built a 360-degree camera called Surround360. The company has spent the past year working on a stereoscopic camera that will allow us to produce 3D video.

Zuckerberg’s adorable 4-month-old daughter, Max, is reaching the age where she’ll learn to walk, and when the time comes, this doting daddy wants to go all out. “When Max takes her first step, we’ll be able to capture that whole scene, not just write down the date or take a photo or take a little 2-D video,” he says. “The people we want to share this with…can go there. They can experience that moment.”

[Image: Facebook’s Surround360 camera]

This is all part of Zuckerberg’s effort to move the Internet beyond text and photos and video to a new mode of communication. The hope is, what begins with 360-degree video will extend to the kind of virtual reality offered by the company’s Oculus headset. “Over time, people get richer and richer tools to communicate and express what they care about,” Zuckerberg said in an interview with the folks over at WIRED. “What’s next? You’re clearly going to be able to experience whole scenes, whether that’s captured through some kind of 360-degree camera or it’s computer-generated, as games are.”

Companies like Nokia and Google share his vision. But what sets Facebook apart is that they’re giving the designs away – yes, for free – they’ll be posted to GitHub this summer.

Zuckerberg sees this as another part of the company’s mission to Connect the World. His idea is to build Facebook around this mission and try all sorts of things to get closer to achieving it. “The real goal is to build the community. A lot of times, the best way to advance the technology is to work on it as a community.”

We love this thought. After all, the greatest technological breakthroughs have come from people around the world, working together, focused on a common goal. How do you feel about Facebook’s new strategy to bring the world together? Let us know in the comments!



Anisha Sawant


Build-a-Bot with Microsoft!

“Our industry does not respect tradition – it only respects innovation,” said Microsoft CEO Satya Nadella when he was appointed. His tenure has mostly been about exploring Microsoft’s future beyond Windows. At the Build 2016 conference last week, he unveiled the company’s plans to bring the world of bots to “conversational platforms” – including Skype, Slack, Outlook, LINE, and more.

To demonstrate this system, Microsoft assembled a chatbot for Domino’s, showing how a conversational interface could replace the standard online ordering forms (e.g. selecting from a drop-down menu to choose your pizza toppings).
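
For a flavor of what replacing a form with a conversation means in code, here’s a deliberately tiny, hypothetical sketch in Python – not Microsoft’s actual framework, just the form-filling state machine that sits underneath many such bots:

```python
# Hypothetical sketch of a form-filling conversation: the bot asks for each
# missing field until the "form" (the order) is complete.

ORDER_FIELDS = ["size", "crust", "toppings"]
PROMPTS = {
    "size": "What size pizza would you like?",
    "crust": "Which crust - thin or classic?",
    "toppings": "Any toppings?",
}

def next_bot_turn(order: dict) -> str:
    for f in ORDER_FIELDS:
        if f not in order:
            return PROMPTS[f]
    return f"Got it! One {order['size']} {order['crust']}-crust pizza with {order['toppings']}."

order = {}
print(next_bot_turn(order))          # What size pizza would you like?
order["size"] = "large"
order["crust"] = "thin"
print(next_bot_turn(order))          # Any toppings?
order["toppings"] = "mushrooms"
print(next_bot_turn(order))          # confirmation
```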


According to Nadella, bots are the next big thing – and they make interacting with online businesses and services easier for users who don’t want to deal with the numerous mobile apps available today, or who are frustrated by navigating the endless sea of websites. Bots work better because you simply talk to them, using natural language.

Sure, we’ve heard about such chatbots before, but Microsoft wants to give the tools to build these bots to everyone. Yaay!


There are two key components available in preview, both part of the larger Cortana Intelligence Suite. “The first, Microsoft Cognitive Services, is a collection of intelligence APIs that allows systems to see, hear, speak, understand and interpret our needs using natural methods of communication”, Nadella said. “The second, the Microsoft Bot Framework, can be used by developers —programming in any language — to build intelligent bots that enable customers to chat using natural language on a wide variety of platforms including text/SMS, Office 365, Skype, Slack, the Web and more.”

Though Microsoft’s “Tay” bot was quite a nightmare, the company demonstrated how artificial intelligence applications built with Microsoft technology can be useful in the real world. Most impressive right now is Seeing AI, an application to help blind people navigate the world, built by a blind Microsoft software engineer named Saqib Shaikh.

[Video: Seeing AI demo]

As seen in the video above, Shaikh uses Seeing AI with both a smartphone and the Pivothead smart glasses to get information about his surroundings. While outside, Shaikh taps the side of his glasses to take a picture of a man doing a skateboard trick, and a voice tells him, “I think it’s a man jumping through the air doing a trick on a skateboard.”

“The app can describe the general age and gender of the people around me and what their emotions are, which is incredible,” Shaikh said.

In another scene, Shaikh uses the smartphone app in a restaurant to find out what’s on the menu. “Years ago, this was science fiction. I never thought it would be something you could actually do, but artificial intelligence is improving at an ever faster rate,” he said.

Microsoft will help power these programs by providing what it calls “cognitive micro services” — little scoops of prepackaged intelligence that give bots the ability to understand natural language, for example, or analyze and label images.
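
In practice, a “scoop of prepackaged intelligence” is typically just a hosted endpoint your bot calls over HTTP. Here’s a hedged sketch – the URL, header and response shape below are placeholders, not Microsoft’s real Cognitive Services API:

```python
# Sketch: a bot delegating image understanding to a hosted "cognitive"
# microservice. URL, header name, and response shape are placeholders.
import requests

def describe_image(image_url: str) -> str:
    resp = requests.post(
        "https://cognitive.example.com/vision/describe",   # placeholder endpoint
        headers={"Api-Key": "YOUR_KEY_HERE"},               # placeholder auth
        json={"url": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["caption"]  # e.g. "a man doing a trick on a skateboard"

# A chatbot with no vision code of its own can now 'see':
# print(describe_image("https://example.com/skateboard.jpg"))
```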

“We want every developer to be able to build bots as the new application for every business and every service,” said Microsoft CEO Satya Nadella. “We want all developers to be able to infuse intelligence into their applications.”

We’re very excited by the idea of being able to build our own little bot! How about you?



Anisha Sawant

Driverless Cars – who wants one?!

It goes by several names – autonomous, robotic, self-driving, driverless. Wikipedia describes it as “a vehicle that is capable of sensing its environment and navigating without human input”.

It is just that. A car that drives you.

How cool is that? One of our major sci-fi fantasies brought to life! (Knight Rider, anyone?)

Despite all the recent hype, the concept is rather dated. Experiments have been conducted on automating cars since at least the 1920s; promising trials took place in the 1950s and work has proceeded since then.

And although the basic idea is the same, there are some key differences in terminology you need to understand.

Autonomous vs Automated

Autonomous means having the power for self-governance. Many historical projects related to vehicle autonomy have in fact only been automated (made to be automatic) due to a heavy reliance on artificial hints in their environment.

Source: Wikipedia

Autonomous vs Self-driving

Autonomous cars will look like the vehicles we drive today – taking over from the driver in certain circumstances.

Self-driving cars are a stage further on. The steering wheel will disappear completely and the vehicle will do all the driving using the same system of sensors, radar and GPS mapping that autonomous vehicles employ.

Source: The Economist

The first self-sufficient and truly autonomous cars appeared in the 1980s, with Carnegie Mellon University’s Navlab and ALV projects in 1984. The Navlab group builds computer-controlled vehicles for automated and assisted driving. Their latest project is the Navlab 11 – a robot Jeep Wrangler equipped with a wide variety of sensors for short-range and mid-range obstacle detection.

[Image: Navlab 11]

Since then, numerous major companies and research organizations have developed working prototype autonomous vehicles – from Tesla to BMW, Volvo and even Google.

And they’ve all got something unique to offer. Take a look below.

Tesla Autopilot

Very few cars are as beautiful as the Tesla Model S. There’s just something about the brilliant design and futuristic software that makes it a clear favorite of technologists everywhere.

Late last year, Tesla unveiled its Autopilot feature – which allows the Model S to use its unique combination of cameras, radar, ultrasonic sensors and data to automatically steer down the highway, change lanes, and adjust speed in response to traffic. Once you’ve arrived at your destination, the Model S scans for a parking space and parallel parks on your command.

Sounds exciting, doesn’t it? But the thought of letting technology do all the work, with no way for the driver to know what’s coming, is slightly unnerving. Fear not – Tesla has provided some peace of mind by updating the Model S display to show an image of the car and an indication of what it is seeing.


Autopilot is already legally deployed in several parts of the world, which puts Tesla way ahead of its competitors in this market. But a human being must still remain in the loop, ready to intervene at any moment.

Tesla is unique in treating its vehicles like software, where users test out the kinks. Every time you need to manually take control, details about what the driver did, and where they did it, are sent back to Tesla’s HQ. From there, they’re sent out to the rest of the Autopilot-equipped cars on the road. Essentially, this means Tesla’s cars shouldn’t make the same mistake twice (personal favorite right here!). Chief executive Elon Musk has said his company wasn’t aware of any accidents while its thousands of vehicles were in Autopilot mode. No accidents!
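
Tesla hasn’t published how this pipeline works internally, so treat the following as a conceptual sketch of the fleet-learning loop described above – log every human takeover, pool the lessons centrally, push them back to every car – with invented field names throughout:

```python
# Conceptual sketch of fleet learning; Tesla's real pipeline is not public.
from dataclasses import dataclass

@dataclass
class Disengagement:
    location: tuple        # (lat, lon) where the driver took over
    driver_action: str     # what the human did that Autopilot didn't

class Fleet:
    def __init__(self):
        self.known_trouble_spots = []

    def report(self, event: Disengagement) -> None:
        """Car -> HQ: a driver had to take control; log where and why."""
        self.known_trouble_spots.append(event)

    def push_update(self) -> list:
        """HQ -> every car: ship the accumulated lessons back out,
        so no car should make the same mistake twice."""
        return list(self.known_trouble_spots)

fleet = Fleet()
fleet.report(Disengagement((37.39, -122.15), "braked earlier for a merging truck"))
print(len(fleet.push_update()), "lesson(s) shared with the whole fleet")
```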

More recently, Tesla pushed out another software update with a brand new feature called Summon. This allows the car to drive by itself while empty – perfectly suited for squeezing in and out of tight parking spaces. This mode comes with the warning that it should only be used on private property.


Google Chauffeur

The Google driverless car project involves developing technology for autonomous cars. The software installed in Google’s cars is named Google Chauffeur.

In May 2014, Google presented a new concept for their driverless car that had neither a steering wheel nor pedals, and unveiled a fully functioning prototype in December of that year. Their in-house prototype Pods are downright adorable! They began testing on roads in 2015. Google plans to make these cars available to the public in 2020 (just look at the little marshmallow!).


The project team has equipped a number of different types of cars with the self-driving equipment, including the Toyota Prius, Audi TT, and Lexus RX450h. But Google is still very much in its testing phase. As of September 2015, Google had test-driven its fleet of vehicles 1,210,676 mi (1,948,394 km). Google has been road-testing in the state of California and has now expanded to Texas, where regulations do not prohibit cars without pedals and a steering wheel.

In recent news, a Google self-driving car attempted to avoid sandbags blocking its path. During the maneuver it struck a bus. Google addressed the crash, saying “In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision”. Google characterized the crash as a misunderstanding and a learning experience. The company also stated “This type of misunderstanding happens between human drivers on the road every day”. They’re not wrong, of course!


Two major challenges circle the autonomous car industry today:

  • The cost of the sensors alone is high enough to scare away most consumers
  • Legality poses a big problem – most places around the world do not yet permit autonomous vehicles on public roads

Having said that, there’s no denying that driverless cars are a future that is fast approaching. How do you feel about this? Wait, before you answer, here’s one more piece of information.

To celebrate its 100th anniversary, BMW created an incredible concept car called the Vision Vehicle.

The visionary automobile uses “materials of the future” and “alive geometry”, and senses hazards in advance. What this means is, it is essentially a very smart, self-driving, shape-shifting vehicle. Yup, you read that right. Don’t believe me? Look for yourself.

I don’t care if this is only a concept, or might take the next century to develop. Anyone not jumping in their seat after watching this video must be dead inside.

Now, coming back to my question – how quickly would you hop in one?



Anisha Sawant