Abstraction, but for robots


Summary

The episode explores the disconnect between hardware and software in robotics and how Viam’s platform addresses it. Simone Kalmakas, VP of Engineering at Viam, explains that Viam is a software platform designed to bring modern software development toolkits to robotics. It acts as an abstraction layer and operating system, allowing developers to compose modules—from hardware drivers to business logic services—into building blocks, drastically reducing the time from idea to working solution.

A key innovation is the API layer that defines protos for hardware components (like cameras), enabling swappability and consistent interaction regardless of the specific hardware model. This composability allows for easy debugging, unit testing, and iteration, similar to practices in web development. The platform also handles fleet management, versioning, over-the-air updates, and security, abstracting away boilerplate code so developers can focus on solving real-world problems rather than integration challenges.

Simone shares illustrative projects, including a personal 20% project where she used Viam to monitor her father-in-law’s lobster trap with a GoPro, discovering that crabs, not lobsters, were the issue—highlighting the importance of gathering data from the physical environment. She also discusses industrial applications like a robotic sanding solution for Viking Yachts, which sands fiberglass boat parts, eliminating dangerous manual labor. The episode underscores Viam’s versatility, from hobbyist projects to industrial-scale deployments, and its role in accelerating robotics innovation.


Recommendations

Companies

  • Viam — A software platform for robotics that provides an abstraction layer and modular system to speed up development from prototype to production, acting like ‘AWS for robots’.
  • Gambit — A startup built on Viam creating a chef robot that uses an LLM layer for auditory interaction, helping with cooking tasks like temperature monitoring and pancake flipping.

Hardware

  • RealSense depth camera — A depth camera used in robotics projects; mentioned as swappable with an Orbbec camera via Viam’s API layer with minimal code changes.
  • Orbbec depth camera — An alternative depth camera that can be swapped with RealSense in Viam-based systems due to the platform’s hardware abstraction.
  • GoPro — Used in Simone’s lobster trap monitoring project to capture underwater images and audio, illustrating the importance of environmental data in robotics.

Tools

  • Viam server — A binary run on a robot’s main board that integrates with hardware components, enabling composability and software interaction for robotic systems.
  • Viam agent — A feature for easy continuous development and over-the-air updates to modules or Viam server, robust to various connectivity conditions like Wi-Fi or Bluetooth.

Topic Timeline

  • 00:00:19 Introduction and guest background — Host Ryan Donovan introduces the episode’s theme on robots and the hardware-software disconnect in AI. Guest Simone Kalmakas, VP of Engineering at Viam, shares her journey from family tech support to Microsoft, founding a startup, and working in health tech and climate tech before joining Viam.
  • 00:02:37 Viam’s abstraction layer for robotics — Simone explains Viam’s software platform for robotics, which brings a software development toolkit to robotics, enabling quick iteration from prototype to production. The platform uses a modular system where modules can be hardware drivers, services, or ML models, composable as building blocks. This abstraction layer allows users to access and combine these components, speeding up development compared to current systems.
  • 00:05:04 Hardware integration and composability — Discussion on how Viam interfaces with custom hardware via a binary (Viam server) run on the robot’s main board. The composability of components lets developers see inputs/outputs in software, debug, and write unit tests. Viam supports common components and allows users to add custom drivers, sharing them via a registry to build a developer ecosystem.
  • 00:07:26 API layer and inheritance for hardware — Simone describes Viam’s proto API system, where hardware modules conform to defined methods, enabling swappability. For example, swapping a depth camera (RealSense for Orbbec) requires only a two-line code change because software interacts with the API, not specific hardware. This abstraction speeds iteration and reduces boilerplate code for robotic startups.
  • 00:09:23 Platform engineering for robots — Ryan compares Viam to platform engineering for robots, handling security, reliability, and other non-business logic. Simone agrees, emphasizing that Viam takes care of these under-the-hood aspects, allowing developers to focus on iterating and solving real-world problems, which often involve environmental factors like lighting rather than logic bugs.
  • 00:10:55 Iteration challenges in robotics — Simone notes the higher cost of iteration in robotics due to physical components. Viam addresses this with tooling for logging, alerting, and fleet management, using fragments to manage configs across fleets. She analogizes Viam to AWS for robotics, bringing web development concepts to a field that previously lacked them.
  • 00:13:13 Resource constraints and Viam’s role — Ryan asks about adapting to resource-constrained robotic environments. Simone explains Viam provides tooling for insight into component performance, avoiding ‘flying blind.’ She shares a 20% project example: using Viam with a GoPro on a lobster trap to monitor activity, which revealed crabs, not lobsters, were the issue—demonstrating the value of environmental data.
  • 00:16:05 Hobbyist and office robot projects — Simone mentions other Viam projects, like a robot that grabs coffee mugs and a cake with facial detection. She highlights Viam’s flexibility from hobbyist to industrial use, with current big use cases in industrial engineering, such as robotic sanding solutions.
  • 00:17:12 Updates, security, and industrial applications — Discussion on over-the-air updates via Viam agent, robustness in spotty connectivity (e.g., marine use), and security through authentication layers. Simone describes a major implementation with Viking Yachts, using robotic arms for sanding fiberglass parts—a dangerous job for humans—involving imaging, mesh creation, segmentation, and motion planning.
  • 00:21:37 AI integration and marine applications — Ryan asks about generative AI integration; Simone mentions a startup, Gambit, using Viam for a chef robot with an LLM layer. She notes traditional ML (e.g., CV) is common in robotics. The conversation concludes with Viam’s versatility in resource-constrained environments like marine applications with limited battery and Starlink connectivity.

Episode Info

  • Podcast: The Stack Overflow Podcast
  • Author: The Stack Overflow Podcast
  • Category: Technology, Society & Culture, Business
  • Published: 2025-12-02T05:30:00Z
  • Duration: 00:24:38

Transcript

[00:00:00] Working with voice? Try Assembly AI instead of wrestling with DIY models.

[00:00:05] You get insanely accurate transcription and diarization across 99 languages,

[00:00:09] all in one API that plugs right into your stack.

[00:00:13] Start building with $50 in free credits at assemblyai.com forward slash stackoverflow.

[00:00:19] Hello, everyone, and welcome to the Stack Overflow podcast,

[00:00:30] a place to talk all things software and technology.

[00:00:33] I am your humble host, Ryan Donovan.

[00:00:35] And today we’re talking about robots and the disconnect between the hardware and software in AI.

[00:00:42] My guest today is Simone Kalmakas, VP of Engineering at Viam.

[00:00:47] So welcome to the show.

[00:00:49] Hello, Simone.

[00:00:49] Thanks so much, Ryan, for having me.

[00:00:50] My pleasure.

[00:00:51] Now, I know we’ve had Elliot Horowitz, founder of the company, on before,

[00:00:55] but we haven’t had you on before.

[00:00:56] So at the top of the show, we like to get to know our guests.

[00:00:59] Tell us a little bit about how you got into software and technology.

[00:01:02] I come from a family of Luddites.

[00:01:04] So we’re talking lawyers, we’re talking doctors and scientists,

[00:01:08] you know, real scientists, not computer scientists like in my field.

[00:01:11] And from an early age, I was always really the de facto tech support for my family,

[00:01:16] swapping out motherboards.

[00:01:17] I got really excited about

[00:01:19] technology, and then that carried into college, where my job in college

[00:01:23] was IT tech support for the other students.

[00:01:26] And I like to say that I’ve never in my whole career had a more stressed out customer than a Yale student

[00:01:33] whose term paper is on an ill-fated hard drive.

[00:01:35] I mean, just never anything like it.

[00:01:38] So that was a good start.

[00:01:39] I went out to Microsoft first as an intern, and then for full time, I was there for six years and I worked on the core

[00:01:46] relevance of the search engine for Bing.

[00:01:49] So dating myself already, but back then, when you typed in a search query, it would just show you the links.

[00:01:54] Things have changed, but that was really my start in data and machine learning, which was really

[00:02:00] exciting for me as a background.

[00:02:02] After Microsoft, I went on to found my own startup called Cindy, which was matching people together based on

[00:02:07] compatibility as roommates, very similar to online dating, but different type of compatibility.

[00:02:12] We ended up selling that startup and I worked in a number of verticals, all focused on using data.

[00:02:18] So I worked in tech.

[00:02:21] I worked in health tech.

[00:02:23] I worked for six years for Flatiron Health in New York City, focusing on developing drugs and treatments for cancer

[00:02:28] patients using data.

[00:02:29] I worked in climate tech at Arcadia, a startup in DC.

[00:02:32] And now I’ve been at Viam for a little over two years.

[00:02:35] Viam is an interesting premise.

[00:02:37] It’s billed as the abstraction layer for AI robots. Obviously, with AI and AI agents, everybody’s talking about higher

[00:02:44] abstraction layers and automation.

[00:02:47] Can you tell me about how that

[00:02:48] layer works at Viam?

[00:02:50] Absolutely.

[00:02:51] So yeah, as you said, we are a software platform for robotics and our goal is to bring the software development toolkit that we all are used to using to develop and iterate into the world of robotics.

[00:03:03] So helping someone quickly and easily go from prototype to production and then to scale further.

[00:03:09] You know, it almost sounds like either it’s service meshes for robots or, like, CI/CD.

[00:03:14] What’s the sort of software engineering paradigm that you’re, you’re bringing to robots?

[00:03:18] Yeah.

[00:03:18] Really an abstraction layer and operating system.

[00:03:21] I mean, the biggest value of Viam is our breadth.

[00:03:24] So we have a modular system where a module might consist of basically a driver for a hardware component all the way through to some code, a service that provides some sort of business logic up to a machine learning model, anything in between.

[00:03:43] So we allow a user to be able to access any of

[00:03:48] these little bits and compose them together as building blocks.

[00:03:52] And that’s really the key to what makes developing on Viam

[00:03:56] quick and iterable, you know, really easily getting you from that first idea to a working solution in a fraction of the time that current solutions do.

[00:04:05] It almost sounds like you’re going for, like, a no-code approach. How composable, configurable, streamlined are these little components to plug and play?

[00:04:13] Yeah, it really depends.

[00:04:14] So first of all, I think that you can build in a no-code way on Viam.

[00:04:18] We also are trying to be quite flexible to be able to be serviceable to a wide range of use cases from individual engineer to a small startup, all the way up to production level, like industrial robots.

[00:04:31] Serving such a big range here is really our goal.

[00:04:34] And so to your question, you can write your own module.

[00:04:37] You can upload it to a modular registry that we have.

[00:04:39] You can quickly use someone else’s module or service that they’ve uploaded as well.

[00:04:45] You can use something that Viam has built.

[00:04:47] The idea is flexible.

[00:04:48] So you could easily, you know, grab the components you need for your system, or you could even write custom logic yourself and add that as another module.

[00:04:57] And that’s the power of this modular building block system: a wide range of uses are possible.
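The building-block idea described here can be sketched in a few lines of plain Python. This is an illustrative toy, not Viam’s actual SDK; every name in it (the registry, the fake camera driver, the threshold service) is invented:

```python
# Illustrative sketch only -- not Viam's real SDK. It mimics the idea of
# modules (drivers, services, logic) registered by name and composed
# together as building blocks.
from typing import Callable, Dict

REGISTRY: Dict[str, Callable] = {}

def register(name: str):
    """Decorator that adds a component to a shared registry."""
    def wrap(fn: Callable) -> Callable:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("camera.fake")
def fake_camera() -> list:
    # A "driver" module: returns a dummy 2x2 grayscale frame.
    return [[0, 255], [255, 0]]

@register("service.threshold")
def threshold(frame: list, cutoff: int = 128) -> list:
    # A "service" module: business logic layered on a driver's output.
    return [[1 if px >= cutoff else 0 for px in row] for row in frame]

def compose(*names: str):
    """Chain registered components left to right into a pipeline."""
    def pipeline():
        out = REGISTRY[names[0]]()
        for name in names[1:]:
            out = REGISTRY[name](out)
        return out
    return pipeline

robot = compose("camera.fake", "service.threshold")
print(robot())  # [[0, 1], [1, 0]]
```

The point of the sketch is the composition step: the application only names building blocks, so swapping or adding a component never touches the pipeline code.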

[00:05:04] I imagine when somebody is designing their robot system, it’s a lot of very custom hardware.

[00:05:09] Having the abstraction layer on top, how does that interface with what could be any sort of hardware on the, what is it?

[00:05:15] The backend front end of the robot?

[00:05:17] What do you call it?

[00:05:17] In the physical system, exactly.

[00:05:19] Yeah.

[00:05:19] So we have a binary called Viam server that you run on the main board of the machine itself.

[00:05:24] And then, as you said, you’ll have hardware components that integrate with that.

[00:05:28] And as you said, you know, one of the key components of building a robotic solution is you have these various varied solutions.

[00:05:35] They all have different components.

[00:05:36] They all are trying to do different things.

[00:05:38] And so this composability is so key to being able to work with that well, because once you have the software platform that treats them as components,

[00:05:47] composable, which is, again, such an innovation of Viam’s, now you can see the inputs and outputs to each resource reflected in software.

[00:05:56] And this is so helpful once you do build up a complex system, like you’re saying, you usually, you know, a lot of robotic solutions,

[00:06:03] they start with an imaging or perception layer.

[00:06:05] You can actually see the output of what your camera is seeing via the Viam interface.

[00:06:11] Then you need to see, you know, now we’ve taken, let’s say, that depth image.

[00:06:15] We’ve turned it into, you know, what we can

[00:06:17] gather from those point clouds, we’re segmenting, we’re, you know, this is just the start of, let’s say, a bigger, longer, more complex robotics system.

[00:06:24] You are able to interact with each of these pieces, easily debug, write unit tests against them.

[00:06:30] You know, everything that you would rely on in the modern software toolkit, except what’s novel is it was not really pre-existing in the robotics world.

[00:06:41] So you can iterate just as fast as you can in the software world.

[00:06:44] Do you have the benefit of knowing that there’s going to be either

[00:06:47] off-the-shelf pieces in robotics hardware, or any kind of standards for interfacing with, like, perception or movement?

[00:06:55] We certainly try to provide support for the most common components.

[00:06:58] And then a key tenet of the system is that if you do come across a piece of hardware that is unsupported, you easily have the ability to add your own driver, add your own support for it.

[00:07:08] And then importantly, upload that to this registry where other developers can take advantage of that.

[00:07:13] So you’re really creating this developer ecosystem with a flywheel:

[00:07:17] the more people who use it, the better the platform becomes.

[00:07:20] So if there aren’t standards or known entities, somebody is able to create that and upload it.

[00:07:25] That’s correct.

[00:07:26] Yeah.

[00:07:26] And one other aspect of the platform that I personally find very exciting as a software engineer is this notion of essentially inheritance, where we define a proto and API for, say, a camera.

[00:07:39] And any model of camera that you would buy has to, in the Viam module, conform to these methods.

[00:07:47] If they conform to these, they can also choose not to implement something and specify that.

[00:07:51] But then what’s really powerful about that is when you’re writing software on top of these components, you’re just interacting with the methods this API has specified.

[00:08:00] And you don’t have to be particular to the actual hardware model itself.

[00:08:04] And so where this is so helpful is then you can easily swap out components.

[00:08:09] You can test different things.

[00:08:10] Oh, like we did this with a robotics solution we’re building at Viam recently.

[00:08:13] We swapped out a depth camera called a RealSense for a depth camera called an Orbbec.

[00:08:17] Perfect.

[00:08:17] And it was a two-line code change, essentially, because you’re just specifying, hey, by the way, this config is actually this hardware.

[00:08:25] And so making it that easy on the software side is so paramount to this iteration speed improvement.
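The swap described here can be pictured with a plain-Python sketch of the inheritance idea: one abstract camera API, two interchangeable drivers, and a config line that picks between them. The class names and the config format are made up for illustration; this is not Viam’s real proto API:

```python
# Hypothetical sketch of "define an API, swap the hardware" -- the class
# and driver names are invented, not Viam's real proto APIs.
from abc import ABC, abstractmethod

class Camera(ABC):
    """The API every camera model must conform to."""
    @abstractmethod
    def get_image(self) -> bytes: ...

class RealSenseDriver(Camera):
    def get_image(self) -> bytes:
        return b"realsense-frame"   # stand-in for real capture code

class OrbbecDriver(Camera):
    def get_image(self) -> bytes:
        return b"orbbec-frame"      # stand-in for real capture code

# Application code depends only on the Camera API ...
def measure(cam: Camera) -> int:
    return len(cam.get_image())

# ... so swapping hardware is just the config line changing:
config = {"camera_model": "orbbec"}   # was "realsense"
drivers = {"realsense": RealSenseDriver, "orbbec": OrbbecDriver}
cam = drivers[config["camera_model"]]()
print(measure(cam))
```

Because `measure` only sees the abstract `Camera`, nothing downstream cares which driver the config names, which is the two-line-change effect described above.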

[00:08:32] So for something like that, the two-line change, I’m sure, is in the abstraction layer.

[00:08:37] But to get that, do you have to look at hardware interrupts or read, like, raw memory dumps on the robot system, anything like that?

[00:08:45] So as a user, you should not be having to do that.

[00:08:47] And this is really another advantage of the Viam platform: a lot of that rote code, error handling, alerting.

[00:08:55] You know, there’s so many pieces that currently anyone building a robotic solution, like, say you’re a robotic startup, you’re two people, you’re trying to get funding.

[00:09:03] You have to spend so much time creating this boilerplate code to just handle how to integrate with robotics components.

[00:09:09] You should be spending your time developing the idea that you created the startup for in the first place and not like this rote boilerplate code that everybody would have to do.

[00:09:17] So really abstracting that out, taking care of that for you, is a big reason to come to Viam.

[00:09:23] Yeah, I mean, I was writing something today about platform engineering, and this sort of sounds like platform engineering for robots.

[00:09:29] The things, like you said, Viam handles soup to nuts.

[00:09:34] Is it everything besides the sort of business logic of the robot?

[00:09:37] Like the security, the reliability, all that stuff?

[00:09:40] Exactly right.

[00:09:41] So the platform takes care of that for you.

[00:09:43] Security, auth, you know, retries, reconfigures.

[00:09:46] You have to tell the system how you want your modules to behave, but all of that should be under the hood.

[00:09:52] Exactly.

[00:09:52] And then your job as the developer now becomes the hard work of iterating, getting things to actually work in the real world.

[00:09:59] You know, I mentioned earlier, we talked about my bio.

[00:10:01] This is really my first time working a lot with robotics, with hardware.

[00:10:06] I had done a few hobbyist projects previously, but really just more pure software.

[00:10:11] And it’s been really eye-opening just how different the engineering process is to get a solution to actually work.

[00:10:17] And that’s where we really want engineers to be spending their time.

[00:10:20] Try this out.

[00:10:22] Iterate.

[00:10:22] Debug easily.

[00:10:23] Like, we’ll make it as easy for you as we can.

[00:10:25] And then you really need to spend those hours, you know, actually getting your solution to work.

[00:10:30] Because so often it’s not a subtle, you know, logic bug.

[00:10:34] It’s more often like, oh, the lighting was off.

[00:10:35] And so now your camera’s images are not representative of how you trained your model.

[00:10:40] Like, it’s more of these environmental factors.

[00:10:42] And that’s where the hours should be focused, not on replicating the base

[00:10:46] functionality that would be agnostic to the solution.

[00:10:49] I’m wondering, you know, how much of that software engineering practice you’ve tried to build into robotics and how much you’re able to.

[00:10:55] Because, like, you know, you talk about iterative processes.

[00:10:57] With robotics, there’s a physical part that has to exist.

[00:11:01] So iterating is a little bit more costly, right?

[00:11:05] Yeah, that’s right.

[00:11:05] And that’s been, as I said, eye-opening as well for me, coming over to the hardware world: the types of iteration are quite different.

[00:11:12] It’s really around assessing the physical environment and really considering.

[00:11:16] There’s a whole new set of variables that could be at play here.

[00:11:20] So what are the sort of missing software practices that you’ve tried to apply?

[00:11:23] I mentioned already, you know, this API layer.

[00:11:25] I think that’s a huge innovation because that swappability, use the right robotic arm for the job, use something very small for certain tasks, use something very big and powerful for other tasks, or even maybe a given solution might need two of them for different times and different situations.

[00:11:42] Then there’s, you know, logging, alerting.

[00:11:45] We talk about this like:

[00:11:46] imagine you’re a 1998 Perl developer and someone hands you AWS.

[00:11:51] Like, how much faster, you know, could you ramp up on that and go?

[00:11:55] Right now, we’re kind of in that 1998 world in the world of robotics.

[00:12:00] And what you need is AWS, right?

[00:12:02] So with Viam, you have fleet management.

[00:12:04] I talked about how important it is to be able to scale your solution.

[00:12:07] So now you can look across your fleet of machines.

[00:12:10] We have a notion called fragments where you’re basically specifying:

[00:12:15] this is the config

[00:12:16] that is common to my whole fleet of robots here.

[00:12:19] You can override that for a specific machine.

[00:12:22] Oh, this one over here happens to have this different camera or whatever.

[00:12:25] That’s like a one-line change.

[00:12:27] And so now when you’re talking about versioning and upgrading your fleet with the newest changes you’ve made, you do that once.

[00:12:35] You deploy that to your entire fleet.

[00:12:37] And you can monitor and see how it is going across your whole fleet.

[00:12:40] And again, like kind of the AWS analogy of we’re all very used to this now in the web development world.

[00:12:45] You can now build a website

[00:12:46] in, you know, a good 10 or 20 minutes of work instead of days, weeks, months.

[00:12:50] In the robotics world, these concepts were sorely lacking.
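The fragment idea can be modeled as a toy: one config shared by the fleet, with a per-machine override merged on top. The field names and machine names are invented for illustration and do not reflect Viam’s real fragment schema:

```python
# Toy model of fleet "fragments": a shared config plus tiny per-machine
# overrides. Field names are invented for illustration, not Viam's schema.
def apply_fragment(fragment: dict, override: dict) -> dict:
    """Shallow-merge a machine's override on top of the fleet fragment."""
    merged = dict(fragment)
    merged.update(override)
    return merged

fleet_fragment = {"camera": "realsense", "log_level": "info", "version": "1.4.2"}

# Most machines use the fragment as-is; one has a different camera.
machines = {
    "dock-bot-01": {},
    "dock-bot-02": {"camera": "orbbec"},   # the one-line override
}

configs = {name: apply_fragment(fleet_fragment, ovr)
           for name, ovr in machines.items()}
print(configs["dock-bot-02"]["camera"])  # orbbec
```

Versioning then becomes a single edit to `fleet_fragment`: bump it once, and every machine without an explicit override picks up the change.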

[00:12:54] In the robotics world, though, you have a much more resource-constrained environment.

[00:12:58] Has it been difficult to adapt your own mindset of engineering to this very much like more of a famine, right?

[00:13:05] You said we have a feast with the AWS and the like.

[00:13:08] Has it been difficult to adapt your mindset to the resource constraints of these robotic devices?

[00:13:13] That’s what Viam really helps with: giving you that tooling

[00:13:16] that was really missing prior to it.

[00:13:18] And so instead of being just flying blind, you actually have insight into how these components are working.

[00:13:25] Yeah.

[00:13:25] I did a project recently that I think is really illustrative of this.

[00:13:28] One thing we try to do at Viam is really encourage 20% projects.

[00:13:31] You know, the concept from Google originally of 20% of your time should be working on your own project in service of, you know, making you better at your day job.

[00:13:39] One project I did this summer that was exciting, to say the least, involved my in-laws, who live in Gloucester, Massachusetts.

[00:13:44] They have a beautiful house on the water.

[00:13:47] And my father-in-law was not catching any lobsters in his lobster trap, especially relative to neighbors, which is infuriating and who wouldn’t want to eat lobster.

[00:13:55] So I said, I think Viam can help with this.

[00:13:57] You know, we do a lot in the marine space.

[00:13:58] So this really had me inspired.

[00:14:00] So we ended up putting a GoPro on his lobster trap, you know, sending it down in the water, having some waterproof cabling, et cetera, and capturing data off of what was happening underneath the ocean.

[00:14:12] And this is also a good example of what I’ve learned in robotics:

[00:14:16] the development cycle is so different. You know, this is true to some degree in software as well, but really in hardware, your initial theory can be quite off from what you end up needing to build to achieve it.

[00:14:27] So I thought I was going to be building this lobster notification system.

[00:14:31] You know, I thought the problem was, you know, lobsters were coming in and out.

[00:14:35] My father-in-law didn’t know.

[00:14:36] He checked the trap once a week.

[00:14:37] No lobsters there.

[00:14:38] Oh, no.

[00:14:39] I was going to, you know, have Viam text my father-in-law, you know, lobster detected.

[00:14:43] No.

[00:14:44] It turns out, as soon as we just simply integrated

[00:14:46] with a camera to get images off of what was happening, my initial assumptions were completely wrong.

[00:14:52] Actually, what was happening is as soon as the trap hit the ocean floor, it was inundated with about 20 crabs immediately who are fighting each other for the bait.

[00:15:03] And I’ll also add that I didn’t realize that the camera was going to be capturing audio.

[00:15:07] So I now have like gigabytes of nightmare fuel footage of just like pincers.

[00:15:11] It’s horrible.

[00:15:12] It’s absolutely horrifying.

[00:15:13] I’m terrified of everybody.

[00:15:14] The point being that even just that initial

[00:15:16] step of adding images, essentially adding eyes to what is actually happening, was incredibly illustrative and specified a whole different set of actions and solutions than we had initially thought.

[00:15:28] Things like: the trap was actually strung up in the wrong orientation of where the ropes were.

[00:15:35] We can immediately fix that.

[00:15:36] That was probably the wrong part of the ocean to have the trap in the first place.

[00:15:39] So we changed the location.

[00:15:40] The lobster notification system: I would have built that whole thing and it would have just had true negatives

[00:15:45] all

[00:15:46] day long and done absolutely nothing for anybody.

[00:15:49] And so really being able to interact and gather data from the physical environment to then build off of is such an important piece.

[00:15:56] Yeah.

[00:15:57] Lobster observability, right?

[00:15:58] Lobster observability.

[00:15:59] Exactly.

[00:16:00] In my case, unfortunately ended up being more crab observability, but that was a key learning.

[00:16:05] Well, I’m sure there’s an FX engineer who would love to have that tape for some horror movie out there.

[00:16:09] You don’t need to add anything to it.

[00:16:11] It’s already horrifying enough.

[00:16:13] That’s interesting, that 20% time.

[00:16:14] So have you and your teams

[00:16:16] built other little test robots?

[00:16:18] Yes, we built a lot.

[00:16:19] We try to have a lot of robots around the Viam office.

[00:16:22] We had one that would grab coffee mugs, you know, just in case people weren’t good about cleaning up their coffee mugs.

[00:16:29] I once saw a cake that had a Viam robot on it, where if the right person came, it would, like, let them eat cake.

[00:16:36] Otherwise it wouldn’t, it was doing facial detection.

[00:16:39] So, yeah, we just try to have a lot of robots around the Viam office. But you know, again, the power of the platform is really that it has so much

[00:16:46] flexibility, where you can build these, you know, more fun hobbyist projects, or some with some function, like in this case the lobster trap, but you can scale it all the way up to an industrial robot.

[00:16:57] I mean, some of our biggest use cases right now are at an industrial engineering level.

[00:17:01] And so I think that’s really interesting to be able to have that flexibility of everything from a hobbyist project to something you really depend on all the way up to being helpful at an industry level.

[00:17:12] One of the things I’ve talked about with, like, IoT developers is

[00:17:16] updates and maintenance.

[00:17:17] Is there any sort of updating built in?

[00:17:19] Like, do you have robots with Bluetooth or wifi connections and how difficult is that?

[00:17:24] We are building support for over the air updates.

[00:17:27] You know, ideally this should be robust to a wide variety of conditions.

[00:17:32] So whether you’re on wifi, whether you’re on Bluetooth, you know, we work a lot with marine use cases where your internet could be quite spotty.

[00:17:40] You might have no connectivity at all.

[00:17:42] We need to be quite robust to all of these solutions.

[00:17:45] We have

[00:17:46] a feature called the Viam agent, which allows for very easy continuous development and being able to push out new updates to a module or even Viam server quite seamlessly.

[00:17:56] And again, the goal with the user not having to worry about any of this and abstracting the hard parts away from the user.

[00:18:03] So when I jokingly talk to other AI and robotics companies, my sort of go-to nightmare scenario is Terminator.

[00:18:10] But I think for this one, there’s an old cheesy Tom Selleck movie called Runaway where tiny household robots go on

[00:18:16] killing sprees.

[00:18:17] So with the over the air updates, how do you prevent malicious access to a robot?

[00:18:22] Right.

[00:18:22] So this is where an authentication layer is so important.

[00:18:25] So being able to make sure that the right API keys are in store, we take security extremely seriously.

[00:18:31] And so making sure that that authentication layer is built in very closely to the updates.

[00:18:37] You know, you talk about the industrial robots.

[00:18:39] What’s the biggest, most complicated Viam implementation you’ve seen?

[00:18:44] The most exciting one, which I’m very

[00:18:46] closely involved with, is with a customer called Viking Yachts, which is a boat building company.

[00:18:52] So they’re building these large sport fishing yachts.

[00:18:55] And one problem they have is a lot of their pieces are built out of fiberglass.

[00:19:00] Fiberglass goes into an injection molding process. When it comes out of the mold,

[00:19:05] it’s very different from the CAD diagram that went in, due to the thermodynamic changes that happened during the injection molding process.

[00:19:13] So a big step that you have to do

[00:19:15] then

[00:19:16] is use maybe a hundred hours of manpower to just take a block sander and do the very arduous labor of sanding this thing down so that it’s smooth and homogeneous, not just for aesthetics, but also, you know, this is going in the water.

[00:19:30] And fiberglass is not great to breathe in, right?

[00:19:32] Well, I mean, that’s a great point.

[00:19:33] So not only are we talking about a hard job to do physically, but it’s also dangerous for humans.

[00:19:38] It’s dangerous to breathe in.

[00:19:40] It’s dangerous to get in your pores.

[00:19:42] So now imagine these people are all in full PPE head to toe.

[00:19:45] They have to carry vacuums with them.

[00:19:48] And one other thing about fiberglass is it’s flammable.

[00:19:51] So now you have to worry about, you know, fire safety, insurance costs through the roof, like so many factors.

[00:19:57] And so this is where there’s really a big opportunity for robots to help.

[00:20:02] You know, robots can’t be hurt by fiberglass in those ways.

[00:20:05] And so what we’re building is a robotic sanding solution that is taking robotic arms and a block sander and finding the best way to sand the surface.

[00:20:15] And so then you get the safety improvements.

[00:20:18] You get rid of these health risks.

[00:20:20] The dangerous manual work also leads to really high turnover for the companies.

[00:20:22] So from a business perspective, it helps them as well.

[00:20:25] It’s really one of those opportunities where a robotic solution just makes so much sense.

[00:20:29] You know, kind of like I also like hearing about robots in hospitals because they can’t contract diseases like humans do.

[00:20:34] So anything where it’s not only hard work, but also a dirty and dangerous job.

[00:20:38] I really think that’s a great opportunity for robotic solutions.

[00:20:42] So there’s lots of interesting pieces to this puzzle.

[00:20:45] You know, first we image the piece itself and, you know, get back point clouds of where all of the points are in space.

[00:20:54] We merge these together.

[00:20:56] Then we create a mesh representing in a digital way the surface that we’ve just imaged.

[00:21:03] Then we have to plan, OK, now in this image, let’s segment it and determine where the strokes are going to go across the surface.

[00:21:11] And then motion planning, which is: now that I know where I want the sander to contact the surface and how,

[00:21:15] I need to get a robotic arm, which has six degrees of freedom, into that path safely and efficiently.

[00:21:23] So a lot of exciting math goes into that.

[00:21:26] And then at the end of it, repeat.

[00:21:29] You’re continuing to image and make sure that you know where the robot needs to go next.

[00:21:33] So really complex end to end system with many points of failure.

[00:21:37] And that’s where the composability of Viam helps the most.
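The image, merge, mesh, segment, and plan loop described above can be sketched as composed stages. Every function name here is a hypothetical placeholder standing in for a real perception or planning module, just to show how the stages chain together:

```python
# Illustrative sketch of the sanding pipeline: image the part, merge the
# scans, build a mesh, segment it into strokes, then plan arm motion.
def capture_point_clouds(part):   # image the piece from several viewpoints
    return [f"{part}_cloud_{i}" for i in range(3)]

def merge_clouds(clouds):         # merge scans into one point cloud
    return {"points": clouds}

def build_mesh(cloud):            # reconstruct a digital surface mesh
    return {"mesh_of": cloud}

def plan_strokes(mesh):           # segment the surface into sanding strokes
    return ["stroke_a", "stroke_b"]

def plan_motion(stroke):          # solve a 6-DoF arm path for one stroke
    return f"trajectory_for_{stroke}"

def sanding_pass(part):
    # One iteration of the repeat loop: image, plan, execute, re-image.
    clouds = capture_point_clouds(part)
    mesh = build_mesh(merge_clouds(clouds))
    return [plan_motion(s) for s in plan_strokes(mesh)]

print(sanding_pass("hull_section"))
```

The value of composing the system this way is exactly what the episode describes: each stage can be swapped, unit tested, or debugged in isolation.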

[00:21:41] Can they add some kind of either generative AI or other modern newfangled AI to that composable system?

[00:21:48] You certainly could.

[00:21:49] We haven’t, not for this solution in particular. Probably the closest towards that is, you know, there’s a lot of interesting statistical work with gradient descent in order to find the best paths.
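As a toy illustration of the gradient-descent-style optimization mentioned here, the sketch below minimizes a made-up one-dimensional quadratic cost rather than a real path cost, which would be far richer:

```python
# Minimal gradient descent on the toy cost f(x) = (x - 3)^2,
# whose minimum sits at x = 3.
def grad(x):
    return 2 * (x - 3)  # derivative of (x - 3)^2

x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * grad(x)   # step against the gradient

print(round(x, 4))      # converges near the minimum at x = 3
```

A real path optimizer would descend over a high-dimensional stroke parameterization instead of a single scalar, but the iterate-against-the-gradient loop is the same idea.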

[00:21:59] But yeah, more to your point, there’s certainly other solutions.

[00:22:02] Like there’s a startup that’s built on Viam called Gambit, which is in the kitchen space trying to build, basically, a chef robot.

[00:22:09] So help you as you’re cooking.

[00:22:11] Thermal sensors telling you: is this meat at the right temperature?

[00:22:15] Is it ready?

[00:22:15] Like telling you when to flip the pancake?

[00:22:17] Things like that.

[00:22:18] Very cool.

[00:22:19] And for them, they’ve integrated a whole LLM layer.

[00:22:21] So it’s a completely auditory interaction layer where you’re talking to the device that’s talking to you back.

[00:22:27] I think that’s a really great use case for more of this generative AI.

[00:22:31] Yeah.

[00:22:31] The robotics stuff uses the more traditional machine learning, right?

[00:22:35] The vision is very big for the convolutional neural networks and all that.

[00:22:39] Yeah, absolutely.

[00:22:40] You see a lot of that in the vision models, exactly more traditional machine learning in the CV space.

[00:22:45] The versatility of the Viam platform is so interesting. We haven’t talked as much about some of the marine applications we’ve done,

[00:22:52] but it’s really exciting how we can make a really resource-constrained environment still feasible.

[00:22:59] So imagine you have a small device with a very limited battery resource on Starlink.

[00:23:05] So like pretty spotty Wi-Fi at best.

[00:23:08] And this needs to be workable despite all of these constraints.

[00:23:12] So I find that so interesting about working in the robotics space in particular.

[00:23:15] And that’s what Viam tries to do, all the way down to a project that you could use in your day-to-day life, home automation, et cetera.

[00:23:25] It is that time of the show again, where we shout out somebody who came on to Stack Overflow,

[00:23:29] dropped some knowledge, shared some curiosity and earned themselves a badge.

[00:23:33] Today, we’re shouting out the winner of a Lifejacket badge.

[00:23:36] Somebody who found a question that was sinking with a score of negative two or less.

[00:23:41] And they dropped an answer that got five or more and brought the question up.

[00:23:45] So congrats to Sergei Kalinichenko for answering “K&R code for getting an int.”

[00:23:52] If you’re curious about that, we’ll have the answer in the show notes.

[00:23:54] I’m Ryan Donovan.

[00:23:55] I edit the blog, host the podcast here at Stack Overflow.

[00:23:59] If you have questions, concerns, topics to cover, et cetera, please email me at podcast at stackoverflow.com.

[00:24:06] And if you want to reach out to me directly, you can find me on LinkedIn.

[00:24:09] Thanks so much.

[00:24:09] I’m Simone Kalmakas and you can find me on LinkedIn as well.

[00:24:12] And how can they learn more about Viam?

[00:24:14] viam.com.

[00:24:15] Well, we’ll talk to you next time.

[00:24:16] Thank you.