Where AI meets steel: robots that learn, think, and work

“Robotics won’t have a ChatGPT moment” was the title of Coatue’s deep dive into the state of robotics back in July 2024. “A GPT-3 moment is coming to robotics,” claimed Stephanie Zhan, partner at Sequoia Capital, in the announcement of Skild AI’s $150M Series A to build the LLM for robotic systems, an investment that Coatue co-led, also in July 2024. When NVIDIA’s own Jensen Huang announced Cosmos, the company’s new platform composed of models trained specifically for robotics use cases, he affirmed that “the ChatGPT moment for general robotics is just around the corner”.

Whether you side with one camp or the other (in essence, they differ only in how sudden the impact of robotics on our day-to-day operations will be), it is hard to deny that AI is unlocking fundamental value in robotics that will have profound impacts on our daily routines. As Brett Adcock, founder of Figure, puts it: (humanoid) robotics are the ultimate application layer for AI.

The intrinsic complexity of robots, their heavy dependence on software advances, and the familiarity of industrial workflows with these systems together create the perfect breeding ground for breakthrough innovation, one that could be as revolutionary as the changes we have already seen on the software side of AI.

Today, the truth is that the impact of AI-driven robotics, both in the workplace and at home, is quite limited. Apart from the Roomba, we do not delegate daily tasks to a robot the way we do to ChatGPT (or even Siri, for that matter). That being said, the pace of progress is accelerating, with autonomous warehouse systems, robotic arms in factories, and humanoid prototypes making tangible improvements.

Weighing the bullishness of the world’s most renowned innovators against the lack of production-ready robotics use cases today, several questions become apparent: (When) will manual labor tasks be largely handled by robotic systems? Which consumer-level use cases will emerge? And, equally important for us at Kfund, which Southern European startups are building in the space?

To answer these questions, we first need to look at the bottlenecks in this hardware-software cooperation: where we stand, where we could potentially be, and what needs to happen for general robotics to steal the spotlight of the tech scene.

Teaching steel

Human brains are the original GPTs, all founded on a common neural architecture but tailored by each individual’s own fine-tuning: the ultimate reinforcement learning (RLHF) process. From a neurological perspective, we understand most of what goes on between our neurons, how they communicate, and what input/output systems they rely on; but just as we spend our first years of existence “becoming one” with our bodies, robotic brains need a large pool of data, contextualized across a variety of scenarios, to learn and maximize the quality of their outputs, more in line with RL dynamics, which are in fact the go-to approach at leading research labs.
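
To make the RL framing above a bit more concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy grid world: a simulated “robot” starts with no knowledge, explores a variety of situations, and gradually converges on a useful policy. The environment, rewards and hyperparameters are illustrative assumptions only, not taken from any of the labs or platforms mentioned in this post.

```python
# Toy sketch of the RL loop described above: a simulated "robot" learns to reach
# a goal cell on a 4x4 grid via tabular Q-learning. Grid size, rewards and
# hyperparameters are made-up values for illustration only.
import random

N = 4                                            # grid is N x N, states are (row, col)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # up, down, left, right
GOAL = (N - 1, N - 1)

def step(state, move):
    """Apply a move while staying inside the grid; reward 1 only at the goal."""
    r = min(max(state[0] + move[0], 0), N - 1)
    c = min(max(state[1] + move[1], 0), N - 1)
    new_state = (r, c)
    reward = 1.0 if new_state == GOAL else 0.0
    return new_state, reward, new_state == GOAL

# One Q-value per (state, action) pair, all starting at zero.
Q = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in range(len(ACTIONS))}
alpha, gamma, epsilon = 0.1, 0.95, 0.2           # learning rate, discount, exploration

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy: explore a variety of scenarios, then exploit what was learned.
        if random.random() < epsilon:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, ACTIONS[action])
        best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, acting greedily on Q walks the robot from (0, 0) to the goal.
```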

This “playing field” in which robots can mature from kids to adults has become one of the trendiest spaces following the most recent innovations. With hyperscalers and other established big tech players releasing their own training infrastructure to streamline development across a wider range of use cases (think of Cosmos from NVIDIA, PARTNR from Meta, or LeRobot from Hugging Face), startups are emerging to compete for this market share.

Europe’s most prominent alternatives are already gathering interest from (potential) customers as well as investors: Genesis Robotics’s rumored $80M and Rerun’s $17M rounds are perfect examples of the investor appetite, while the platforms from 1ms and Phospho are part of the current SOTA helping builders make robotics accessible at scale. In Spain, Bopti is already developing an agent to remove troubleshooting before and after deployment, and Saturn’s southern European roots are helping unlock the data bottleneck with synthetic data creation, ideal for pre-training environments.

If the end goal is for robots to become as autonomous as humans are, there needs to be a system that lets them continuously react to their environment and make the best decision in any given situation. To achieve human-level dexterity, both hardware and software face crucial challenges:

  • From a hardware perspective, mechanics need to be on par with biomechanics: as James Somers outlines in this New Yorker article (please read it, it is seriously good), a human hand can move in 27 separate ways. For robots to achieve that degree of freedom with high precision across a wide range of movements, actuators’ sophistication needs to be on par with software-driven movement optionality.
  • From a software perspective, robots need to not only adapt to the environment and make decisions based on it, but also understand which decisions are even feasible given physical constraints.
    • In traditional robotics, this is an easier task: since the robot constantly moves from a fixed point A to a fixed point B, boundary conditions are easily defined. Once point A and point B can be any coordinate on an unknown map, the complexity of setting up those boundary conditions increases exponentially (see the sketch after this list).
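
To illustrate that jump in complexity, here is a small, hypothetical sketch: moving between two fixed points needs no search at all (the route can be hard-coded), while reaching an arbitrary coordinate on a map with obstacles already requires a planner. The grid, obstacles and breadth-first search below are illustrative assumptions; real systems rely on far richer maps and planners.

```python
# Hypothetical sketch of the planning gap described in the list above.
from collections import deque

GRID = [            # 0 = free cell, 1 = obstacle (toy map, made up)
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def fixed_route():
    """Traditional setup: point A and point B are known in advance,
    so the 'plan' can simply be hard-coded."""
    return [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]

def plan(start, goal):
    """General setup: any start/goal on the mapped grid requires an actual search.
    Breadth-first search over free cells; returns a shortest path or None."""
    rows, cols = len(GRID), len(GRID[0])
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None     # goal unreachable given current obstacle knowledge

print(fixed_route())           # the fixed A-to-B case
print(plan((0, 0), (3, 3)))    # the "any coordinate" case needs actual search
```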

A few newcomers are emerging as the builders of foundation models for all-purpose robotics, namely Skild AI and Physical Intelligence (with its laundry-folding use case). Among the hyperscalers, only Google has released a dedicated offering with its Gemini Robotics models. The U.S. seems to be, once again, leading the way across this layer, although Mistral’s Les Ministraux, optimized to run on the edge, might be the strongest force in robotics-oriented models developed on European soil.

A key element in tipping the scales towards a winner in the space will be the approach used to train the models. As mentioned above, RL is trending over teleoperation or RLHF; in any case, approaches such as in-context learning (the Spaniards at Theker being guilty of this) or embodied reasoning (present in the Gemini Robotics-ER models) are also establishing themselves as plausible alternatives for companies to unlock new use cases on top of core models.

Size (and shape) matters

Several factors still need to fall into place for general-purpose robotics to become normal among humans. Rome was not built in a day, and GPT-3.5 was not the earliest form of AI that humans communicated with. On that front, two questions need to be cracked for the ChatGPT moment of robotics to happen: how we will interact with these machines, and for what purpose.

For the most part, we take for granted that the human body is the form best equipped for the specific tasks humans do, and so robots performing those tasks should have a human-like structure. Agility, Tesla, Figure and a dozen other companies developing humanoids are living examples of this.

For other purposes, the consensus is that non-human shapes are better suited to current human-driven jobs, and so the structure does not resemble a human. This is the case for security and supervision at chemical plants, for which ANYbotics (and its Spanish counterpart Keybotic) have built robot dogs that offer more stability when going up and down stairs, can carry a larger battery, and can reach difficult spots across the facility. By contrast, Star Robotics has opted for wheels instead of legs to increase speed.

Hardware is an intrinsic constraint that companies in the space have to deal with, but it can also lead to a better-fitting business opportunity: RobCo has adapted the dimensions of the robots used in larger factories to fit the specifics of smaller facilities. Others, such as Shinkei or Gravis, are simply placing a brain inside the traditional hardware already present in their niches. Sereact, NODE and the southern Europeans Theker and Cyberwave are following the software-enabled, hardware-agnostic approach.

A common trend across Southern European startups is simplifying robots as much as possible to minimize the implementation constraints of their software-driven hardware: following in the footsteps of Amazon-owned Covariant, Friday Systems and Kaigos are using robotic arms for several purposes within warehouses. In health, Marsi Bionics helps kids walk again by developing lightweight pediatric exoskeletons.

Given that these robots adapt to the specific task at hand, what should a general-purpose robot look like? Just as UIs differ depending on the nature of the user and are becoming even more customized with AI, different robots will come in different shapes and sizes to accommodate their ICP.

Nonetheless, we believe there will be a standardized shape for the more general use cases, in the same way that a chatbot-like UI is the standardized shape for consumers to interact with AI. As of today, one would expect general-purpose robots to be humanoids, even more so after Figure announced its plans to ship 100,000 humanoid robots over the next four years.

We also need to figure out human-robot communication so that these systems properly carry out our desired tasks. Robotics-LLM companies will be key here, since they will allow NLP to be embedded within the robot; moreover, speech-to-speech reasoning and control models (similar to what Figure’s Helix is showcasing) should become the gold standard for interaction with general-purpose robots, lowering the barrier for anyone to engage with them.

Robots building (and selling) robots

The most obvious bottleneck, when comparing the robotics investment opportunities VCs are presented with today against what we have grown used to over the last few years, comes from what we can touch: hardware is much more difficult to produce and scale than pure software.

This presents a two-fold challenge. On the one hand, there need to be large manufacturing facilities that can sustain the intended production volume. Tesla is a perfect example of a pioneer in the space, and Figure is putting a lot of effort into overcoming this challenge as well. Current robot manufacturers will also play a key role in providing the pieces that unlock such an AI-driven revolution; although if they decline to do so, software providers will push to cover this part as well. The secret sauce, as previously explained, is not in the muscles but rather in the brain of the robots.

On the other hand, there is no better place to scale what AI-driven robotics enables than in supply chains where repeatability is the only constant. Admittedly, these factories have historically been largely automated already, although those efforts have been devoted to extremely mechanical tasks. The many tasks that require some understanding of the surrounding environment and a degree of freedom in movement were traditionally human-driven; today they can already be automated, just as their “more static” counterparts were in the 60s and 70s. Manufacturing lines thus become the obvious target for early adoption of robotic companions in the workplace.

This GTM segment is not obvious in Europe in any case, given the lean times the automotive and aerospace industries are going through. Logistics, fashion or waste management, on the other hand, have traditionally been low-margin sectors where innovation was constrained to direct impacts on the P&L. Since traditional robotics could not address some of the practices that AI-driven robotics will, we believe there is a huge opportunity in replacing humans across the supply chain, directly enhancing both margins and productivity.

As models, infrastructure, integrators and hardware evolve, these solutions will allow more degrees of freedom, catering to a more extensive variety of use cases and therefore tailoring each of them to a more specific end customer. In any case, this is not as easy as signing up on a website and generating value for a business straight away. The value-add demonstrated from day one needs to be larger, but customer loyalty is also bound to increase once a customer commits to such a solution.

When UX and GTM hypotheses are validated at a certain scale (100,000 humanoid robots in production should be more than enough for that), massive consumer adoption should follow. Given how much this physical interaction touches every single part of our day (lots of people spend a huge amount of time online, but every single person spends 24 hours a day on Earth), we anticipate a never-before-seen virality effect, with wow moments happening constantly across a wide variety of use cases.

Early, rudimentary signs of this phenomenon include Tesla’s FSD and Waymo’s driverless rides, the streamer Kai Cenat’s new robotic friend, or drone-first pizza delivery. Although we are still far from normalizing these situations, we are slowly and subtly getting used to interacting with metal-wrapped autonomous computers, much faster than we realize.

Security, security, security

Security is perhaps the most pressing topic to solve before general-purpose robotics hits the masses: it is easy to control what a robot does in a test environment, but letting it out into the wild is a completely different matter. Additionally, a robot embeds a larger variety of systems behind a single CPU than almost any other product. This mix of software layers and physical exposure makes robotics uniquely vulnerable, both to cyber breaches and to real-world consequences.

In any case, the overall sentiment seems to underestimate the matter, as we do not hear about it too often in the news. The underlying reality, though, is that breaches affecting robots have been increasing over the past couple of years: from Roombas listening in on your conversations to the spike in attacks on medical device providers, modern OT cybersecurity is a real issue that will only grow as we welcome more robots into our routines. And saying that robots are like kids at the beginning of their training is not an exaggeration at all: this is what a standard early test in a robotics lab looks like.

As has also happened in adjacent sectors, regulation is providing tailwinds, pushing adoption to ensure safe deployment. This is being enforced from several angles:

  • On the manufacturer side, the Cyber Resilience Act (CRA) facilitates the safe deployment of these systems at scale across Europe.
  • On the customer side, the well-known NIS2 directive is enforced in the industrial settings where robots are mainly present today.
  • On May 10th, a new global robotics safety standard, ISO 25785-1, was introduced. This represents a significant milestone, as it aims to mitigate the potential risks associated with the rapid expansion of autonomous hardware in the coming years.

As is already happening in other cybersecurity-related domains, AI poses both a challenge and an opportunity, and the opportunity is proving extremely difficult for established companies to capture. Nozomi or Claroty are not moving as fast as the potential threats demand, and newer players are filling those gaps. The Spanish Alias Robotics has been one of the earliest evangelists at the intersection of cybersecurity and robotics, and other more generalist OT/IoT cybersecurity teams such as Steryon or Exein are also leading physical security efforts in southern Europe.

Are we in an Ex Machina world already?

For better or worse, not yet. Although it should be clear by now that we are bullish on the role of AI in unlocking robotics-driven productivity at scale, today is still day 0, and our mission remains to understand how impactful this space will become.

For these first systems to be properly trained and then safely and correctly deployed, a new ecosystem of companies is being unlocked. Infrastructure, cybersecurity and physical safety, DevOps, interpretability, novel UIs, and more will verticalize the intrinsic complexity of embedding AI within a real-world system and maximizing its output. Although those who crack the end-to-end distribution challenge will capture the larger market shares, the facilitators who open doors to those previously unable to deploy hardware solutions at scale will generate substantial returns as well.

We are not in an Ex Machina world, yet. Nonetheless, the building blocks are being laid, and it is happening fast. The companies that nail training, deployment and trust will define the future of robotics and its influence on the future of work and life. If you are building something that bends metal with intelligence, or enabling others to, reach out to jorge@kfund.vc. We would love to learn more about it!
