A Methodology for Fast, Scalable IoT Software Development

In the 11th installment of our video series, “Change What's Possible,” Corellium’s SVP of Partnerships Bill Neifert chats with Arm’s Director of Marketing Eric Sondhi about IoT software development flows. Watch the video to learn how a scalable, cloud-based methodology sped up DevOps tasks and significantly reduced costs compared to running device farms — and how integrations with cloud services such as GitHub Actions and AWS IoT Greengrass improved how IoT software was developed.

(00:00)

Bill Neifert: OK, excellent. So thanks everyone for joining us today. My name is Bill Neifert, I manage partnerships here at Corellium. With me today I have Eric Sondhi, who is a senior manager over at Arm, so I'll be handing off control to him about halfway through all of this as well. So we're here today to talk about accelerating IoT device software with Corellium, and here's the rough agenda of what we'll go through. I'll talk a bit about the state of IoT software development and how things are being done today, talk a bit about the Corellium technology and how we're enabling DevSecOps for IoT. Then I'll hand over to Arm, and Eric will talk about Arm Virtual Hardware, which is how a lot of our IoT solutions are coming to market. Eric's going to talk about some great use cases and show some cool solutions and demos, and then we'll lay out next steps and answer questions.


(00:59)

Hopefully we'll have some good time for Q&A as well. We're targeting running this in about 45 minutes to give you 15 minutes or so for Q&A. If you do have any questions as we're going through, please feel free to raise them in the chat as they come up. We may answer them as we're running through or may push 'em off to the end, depending on how it fits in context. So with that set, let's go on to the next slide. So IoT development is interesting. First off, the nice thing is that most of the IoT devices in the market today are powered by Arm processors, so Arm has done a fantastic job of getting this out there and making it pretty ubiquitous. But it does mean that the software here can't really be tested and run on a laptop.


(01:51)

You actually need to run on the device itself, which means that if you're doing this development, you need to, number one, get the device, and number two, get a device lab set up: typically a number of these devices cobbled together so you can run a bunch of testing and regressions and such. The problem with this, though, is that devices are tough to maintain. They aren't designed to be set up like this. PCs you can stick in a rack and manage easily; physical devices are tougher to maintain because, quite honestly, they're not designed to be used this way. So it's easy to get them lost in shipment or, hell, just lost on your desktop, or get them bricked: nothing like applying an over-the-air update and all of a sudden the device doesn't work anymore.


(02:42)

Now you're out whatever you spent on that device. Not to mention a lot of these things are meant to run on batteries; if you've got them plugged in all the time, now you have safety concerns. If you want to ship them to remote sites, these things can get lost or stolen, get held up in customs, or break on the way. We've had this happen with our own stuff. A physical device is also a black box: you can't really see inside of it to see what's going on. And you probably had to wait for this hardware to be around in order to start developing your software, meaning you're extending out the lifecycle and how long it takes you to get this to market. And what happens when you need more? What happens when you find out that you don't have enough throughput? You've got to go buy more devices and get them all set up and tied in. If you look at how development is evolving, it is slowly moving from the way embedded devices have always been developed, a developer sitting at a desktop with a board, out to device farms. So let's talk about moving this into a more modern development paradigm. Let's modernize IoT by bringing it to the cloud.


(04:01)

You can scale virtual devices much more easily. You can take a virtual device and apply cloud-based methodologies to it. The nice thing about the cloud is that people got there before us: they've been developing with great software practices and leveraging great tools for quite some time, and you can tie directly into all of this. Now you can use APIs to scale how your devices run, use APIs to debug and interface with the devices, and easily scale them up and down. You have complete control over how many devices you've got, and if you suddenly decide that you need a lot more, it's just a matter of making a few more API calls to get them up and running. So you can go from zero devices on the weekend to hundreds or thousands or even more, depending upon how many you want to spin up.
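As a sketch of what "scaling with a few API calls" can look like in practice, here is a minimal Python example. The endpoint URL, token, and device model name are hypothetical placeholders, not the actual Corellium API; only the scaling arithmetic is concrete.

```python
import json
import urllib.request

API_BASE = "https://api.example-cloud.dev/v1"  # hypothetical endpoint, not the real Corellium API
TOKEN = "your-api-token"                       # placeholder

def plan_scaling(current, target):
    """Return how many virtual device instances to create or delete
    to move a fleet from `current` to `target` size."""
    delta = target - current
    return {"create": max(delta, 0), "delete": max(-delta, 0)}

def create_device_request(model="rpi4b"):
    """Build a (hypothetical) device-creation request object.
    The path and payload schema are illustrative only."""
    body = json.dumps({"model": model}).encode()
    return urllib.request.Request(
        f"{API_BASE}/devices",
        data=body,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Zero devices over the weekend, a hundred for Monday's regression run:
print(plan_scaling(0, 100))   # {'create': 100, 'delete': 0}
print(plan_scaling(100, 10))  # {'create': 0, 'delete': 90}
```

The request above is only constructed, never sent; in a real script you would loop `plan_scaling(...)["create"]` times over `urllib.request.urlopen(...)` against whatever endpoints your provider actually documents.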


(04:57)

You can also do this worldwide with the click of a button, so you no longer need to worry about shipment delays or about making sure that everyone is on the exact same version all the time. Additionally, you can take snapshots of all of this. Once you find a bug, you can basically hand the snapshot off to the development team and say, hey, the bug is right at this point. Virtual devices also enable you to shift left, a popular term for starting development earlier. With a virtual device, you don't need to wait for hardware to start development. You can do this well in advance of having actual silicon, so by the time the actual device shows up, you can get this thing out to market as fast as possible. But a lot of people don't talk about the other aspect of this.


(05:44)

Now let's shift right. Once this device is out running in the field, you still have the ability to scale a virtual test environment up and down to apply things like over-the-air updates, checking every single version of software that you've ever shipped, since who knows how people are going to handle these upgrades. If you upgrade from version 1.7 to version 2.5, how do you test against that? Now you get to test the whole matrix of possibilities and validate this long before you brick these things out in the field. So you can start development before silicon and keep using the same platform well after silicon. Of course, this isn't new. People have been using virtual prototypes for a long time to solve a bunch of problems. It's only recently we've been looking to apply them to IoT, but virtual prototypes have a bunch of traditional barriers that have prevented them from being used in IoT.
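The upgrade-path testing described above (1.7 to 2.5, and every other combination) is easy to enumerate mechanically. The version list below is hypothetical and the flashing step is only a comment; the pair generation itself is real code:

```python
from itertools import combinations

def upgrade_paths(shipped_versions):
    """All ordered (from, to) pairs a fielded device could jump between
    in a single OTA update, oldest to newest."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    ordered = sorted(shipped_versions, key=as_tuple)
    return list(combinations(ordered, 2))

# Hypothetical release history for a device family:
versions = ["1.7", "2.0", "2.5", "3.1"]
for src, dst in upgrade_paths(versions):
    # In a real flow you'd spin up a virtual device per pair, flash `src`,
    # apply the OTA update to `dst`, and assert the device still boots.
    print(f"test OTA {src} -> {dst}")
```

Even four shipped versions already mean six upgrade pairs to validate; with a virtual fleet, each pair can run on its own instance in parallel.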


(06:47)

And the first is that they just take a long time to develop. By the time you get around to getting these things developed, it's oftentimes too late to really add value. In addition, they run slow. Modern virtual prototypes typically run at best at tens, sometimes hundreds of megahertz, but usually much slower. So especially with today's complex IoT workloads, you're looking at something that can take tens of minutes just to get to a boot prompt. That's when development starts on these things, and you've already spent most of your time just getting it up and booted. In addition, you're not always running the same code on traditional virtual prototypes. Sometimes you have to simplify your code to get around things that aren't implemented, or your virtual prototype was built in a way that simplifies the peripheral interfaces. QEMU is famous for this.


(07:47)

It uses block devices for its peripherals. That means it runs a lot faster, it's a lot easier to develop, and it has OS support built in, but now you're not exercising the same binary pathways when you simulate as you will be with a physical device. This of course has development and safety concerns that go along with it when you look to certify. In addition, they're expensive. Traditional virtual prototypes have license fees in the tens or sometimes hundreds of thousands of dollars for a single copy. What happens when you want to scale? Now you've got to go back and buy more. So there are a number of barriers that keep virtual prototypes from being used well, especially in the IoT use case. And as you may imagine, we'll address these later on, but let's pause for a second to talk a bit about why Corellium.


(08:42)

People have been doing this kind of stuff for a long time. Corellium is kind of a newcomer in the virtual prototype business, but actually Corellium has been around for a number of years now, not so much focused on virtual prototypes. Corellium's background is in mobile phone virtualization, where we've existed for around five years now, focused on virtualizing mobile devices, both Apple and Android, for security research. Our devices run at near real time and can run unmodified binaries for both Apple and Android devices, and we support all the fun features and sensors you'd expect: things like Bluetooth, camera, GPS, et cetera, to give you a richer experience, to analyze attack surfaces, and to make sure these things are as secure as possible. The mobile devices Corellium makes are focused on vulnerability research, security, DevSecOps and things like that.


(09:48)

And we have a very rich and robust business in that space. If you go and look for virtual iPhones, we're the only game in town, and we're well respected in the industry. Taking a look under the covers, we have more than just a mobile phone virtualization engine, however. This is a virtualization engine that can virtualize the behavior of any Arm device. We support all of the Arm architectures: Cortex-A, Cortex-R, Cortex-M, Cortex-X, Neoverse, whatever new marketing term they come out with, in versions from Armv7 through Armv9, 32 and 64 bits. We map the processors one-to-one to the underlying hardware: for every processor in your physical device, we map to a server processor. This means we get really good performance, especially as the design scales in size, because we're mapping down to the physical processors and taking advantage of the speed of the underlying server. It's going to execute the exact same binaries as your real hardware, with the same pathways, and we model the peripherals to match, making sure that you're exercising the exact same behavior.


(11:08)

And since we architect this in the way we do, it actually executes really fast, typically at or near real-time speeds. And in fact, a lot of the devices that you're going to see here are running faster than the real devices that they're virtualizing.


(11:26)

About a year or so ago, we announced our partnership with Arm for Arm Virtual Hardware. Eric's going to talk with you about this quite a bit, but we've partnered to develop certain third-party boards. I think here we have a few boards from NXP and one from ST, and of course the Raspberry Pi. We have virtual models of all of these, and we're constantly working with Arm and other partners to roll out more and more boards. And of course, like I said, Eric's going to go more into this. Let's talk a bit about some of the features of Corellium’s IoT devices. This is a screenshot of what comes up when you're running one of our devices. In this case it's an i.MX93 from NXP, and this is the standard display that you'll see.


(12:16)

We'll have the console window and then the graphics display window over to the side. If there are LEDs or push buttons, we make those available to you too. There's a rich set of features on the left-hand side as well, so let's dive a bit more into those, because we're virtualizing not just the core behavior but also all of the ways in and out of this device. So there is a way to connect to this device. It is running in the cloud, although we have the ability to run on-prem as well, which I'll discuss, and you can connect to it using SSH or VPN so that it can look like a device directly on your network. We have a set of APIs called CoreModel APIs that you can use to attach to the peripheral interfaces, so you can generate transaction-level interfaces to tie this into your environment and make it look like a device in your environment.


(13:08)

And of course we generate a unique IP address so you can attach remote consoles, GDB, et cetera, and interact with it directly. We have the ability to trace the execution of any thread or process running on here. We have a rich set of settings to let you change things like the boot options, the name, the RAM size, which kernel gets loaded (because we want you to be able to modify the kernel and put that in here), or which device tree, so you can scale up and down the devices that you're accessing. We also give you access to virtual memory-mapped I/O, so you can extend these peripherals and modify them, and to the storage partitions. You can view any of the system consoles in here. In this case, we've got a core complex of A-class processors as well as some M-class processors.


(13:52)

So you might want to view either one of those consoles, as well as a rich set of sensors. We'll let you take the webcam from your device and feed it into the IoT device (we'll show that in a demo later), and the same with a microphone. We'll also let you manipulate any of the other sensors that may exist. The i.MX93, for example, has a gyroscope, a temperature sensor, and accelerometers, and we give you the ability to manipulate those directly in the UI, or API access to modify them as well. And finally, the ability to create snapshots of all of this. As I mentioned earlier, it's nice to interact with other members of your team by creating a snapshot and handing it off to them: I found a bug here, here's the system running, run from this point and see the bug. So it's a great mechanism for sharing among your team members and isolating bugs.
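Since the same sensor manipulation is exposed through an API as through the UI, a test script can drive sensor values programmatically. The JSON field names below are illustrative assumptions, not a documented Corellium schema:

```python
import json

def sensor_update(sensor, value):
    """Serialize one virtual-sensor reading as a JSON request body.
    The field names here are illustrative, not a documented schema."""
    return json.dumps({"sensor": sensor, "value": value})

# Drive the i.MX93's sensors from a test script instead of the UI:
print(sensor_update("temperature", 41.5))
print(sensor_update("accelerometer", [0.0, 0.0, 9.81]))  # device lying flat
print(sensor_update("gyroscope", [0.1, 0.0, -0.2]))      # slow rotation
```

A regression could ramp the temperature through a loop of such updates and assert that the firmware's thermal-throttling path triggers, which is awkward to do repeatably with a physical board.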


(14:53)

In addition, if you want to develop your own devices, we have what we call the CHARM developer kit, which is basically an NVIDIA box on which we've loaded all of our software. Our developers use these same boxes, and you can use one to develop your own devices, starting with a rich set of examples and existing devices that we provide, and using the same API calls and model library that our own team uses to bring up new devices from scratch, extend devices that exist, or modify them to behave however you need to solve your tasks. Our devices can be deployed in a number of different ways as well. AWS is our partner for doing things in the cloud. AVH runs natively on AWS, our own devices run natively on AWS, and so we've got a great integration with them.


(15:47)

We do run on bare-metal AWS instances that we control, so you won't currently find us in the AWS Marketplace; you need to establish a relationship with us. We're actually working with AWS on removing that barrier, though, so soon we'll be part of the AWS Marketplace and able to integrate directly into AWS accounts. If you want to run things onsite, we actually resell Ampere Altra boxes that can plug directly into your own compute infrastructure and run on your network. We also have the ability to run virtual private cloud instances on AWS, so if you do want to run in the cloud but in your own cloud, we can provide instances that will run in your own AWS cloud. And finally, the desktop appliance: the same box that we use for the developer kit can be used as a standalone, typically single-user desktop appliance.


(16:51)

I've got one more slide here I'd like to talk about on use cases before we wrap things up and hand off to Arm, but let's talk about how this all fits into a DevOps workflow. The whole thing of course starts with virtual devices, and I've got a few sample devices here. We've done far more than this, but we've got a virtual reality headset, a mesh router, and a mobile phone. The mobile phone is actually really interesting because a lot of IoT devices don't just run by themselves; they couple with an app that runs on your mobile phone. And of course, as I mentioned earlier, we virtualize mobile phones, so we have the ability to spin up a device and its companion app running in a mobile phone. Once we do this, of course, you can deploy in any number that you need.


(17:41)

These are cloud instances. You can scale up and down, and you can easily get more users onboarded. We have the ability to establish projects and groups inside your organization, share virtual networks, et cetera, or form virtual teams. Of course, as I mentioned before, there's powerful tooling that already exists for testing things on the web; you can leverage that directly or use your own tooling to isolate bugs. You can easily collaborate with the members of your team using snapshots and shared devices. All of this generates quicker feedback as a way to solve bugs faster, get them patched faster, and then go back into the loop. Of course, the whole thing about CI/CD is the ability to deploy at any point in time as well.


(18:37)

And having established a nice tight loop like this, it gives you the ability to more quickly spin this around and do faster, more secure releases. So if we look at the barriers that I raised earlier, we address these. Instead of taking a long time to develop, we've got a rich library of models available today, and you can extend and modify these models using our CHARM developer kit. Instead of running too slow for modern workloads, we run as fast as or faster than real devices, and most of our devices actually boot in seconds. You may see that in the demo later on (I forget whether or not it's shown), but we run really fast. Our Raspberry Pi device runs compute workloads four times faster than the real Raspberry Pi; it's a great speed metric we like to give. Instead of requiring code modifications to run, we run the exact same binary as you run on the physical device and execute the same binary pathways, because we've modeled the peripherals exactly. And instead of a $10K to $100K price point, we have a SaaS model that runs in the cloud. You can start using this for $1.15 an hour, and we have free trials available as well, which we'll discuss at the end. So with that said, let me hand over to Eric so he can introduce Arm Virtual Hardware. Eric, take it away.
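To make the pricing contrast concrete, the break-even point between an upfront license and hourly usage is simple arithmetic. The $10K–$100K and $1.15/hour figures are the ones quoted above; everything else is just a worked example:

```python
def breakeven_hours(license_cost, hourly_rate):
    """Hours of hourly usage that add up to one upfront license fee."""
    return license_cost / hourly_rate

for cost in (10_000, 100_000):
    hours = breakeven_hours(cost, 1.15)
    print(f"${cost:,} license = {hours:,.0f} device-hours at $1.15/hr")
# A $10K license buys roughly 8,700 device-hours, i.e. about a year
# of a single device running around the clock.
```

The practical point is that consumption pricing only exceeds a license fee after sustained, near-continuous use of a single instance, while letting you burst to many instances for short test peaks.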


(20:03)

Eric Sondhi: All right, thanks Bill. I think you need to stop sharing such that I can switch over.


Bill Neifert: Oh, sure. I'll do that there.


Eric Sondhi: There we go. And hopefully you can soon see my screen picking up where Bill left off. How's that?


Bill Neifert: Yep.


(20:24)

Eric Sondhi: Wonderful. All right, thanks Bill for the introduction. My name's Eric Sondhi. I'm in our IoT line of business, looking after the go-to-market for Arm Virtual Hardware. Really pleased to be partnering with Corellium, having Corellium’s technology underpin our Arm Virtual Hardware devices, and also very, very proud to have their service, which Bill has described a little bit, as the entry point for users to get access to Arm Virtual Hardware. So I'm going to tell you a little bit about why at Arm we've embarked on this journey and have this vision of deploying Arm Virtual Hardware for the IoT. It's really going to double-click on a lot of what Bill has described. Then we'll get into some of the traction that we've had with Arm Virtual Hardware over the past year and a half or so through our partnership with Corellium, and then into some use cases where we see the most uptake and traction, to give you a sense of which IoT segments are really benefiting from Arm Virtual Hardware and which direction of travel we're going in, really using Corellium’s technology as Arm Virtual Hardware to push the envelope for what software developers can do.


(21:49)

So with that, just to come back to what Bill was describing: this cloud-native software development paradigm. This is something that's over 10 years old, where compute infrastructure has been readily available to developers and organizations, enabling the development of very complex microservices and cloud-native applications, continuously deployed, more and more empowered by AI, sophisticated algorithms, distributed compute, and ubiquitous networking. That has all come together to give developers really great environments that allow scalable DevOps, scalable MLOps, and CI/CD, where updates are continuously pushed over the product lifecycle. And especially with software-defined products, where AI is continuously trained, updated, and pushed out to end devices, or where connectivity is essential and connectivity updates and security patches need to be continuously deployed over the air, this paradigm is really blossoming and really needed in the IoT space.


(23:10)

And there are a number of challenges developers face in really taking advantage of this cloud-native development. We see these challenges come from the diverse range of products that IoT and embedded developers are working on, and from diverse developers and environments that have real challenges unlocking the potential of the cloud, many of which Bill has already alluded to. So we see a spectrum of developers: deeply embedded developers working with physical hardware that doesn't scale beyond the board farm or the board on their desk, and, at the other end of the spectrum, cloud-native developers already working in the cloud today, developing microservices, IoT orchestration environments, and AI and MLOps services, where it's very hard to bring embedded developers into their workflows and environments. And there are a couple of point examples here. One is where you have ML that's been trained in the cloud, on servers and cloud platforms, but needs to be migrated to end devices.


(24:23)

And that push to deploy to the edge device is hard, because now the service developer or AI developer has to become an embedded engineer, work with physical devices, and move off the cloud where they've done all their work to date. So that's been a real challenge and barrier. And then there's another point case, again touching back to what Bill was describing earlier: binary images are created entirely in the cloud already, but then need to be deployed to a fleet of devices over the air, and staging those over-the-air updates and testing that workflow without bricking devices is a challenge.


(25:09)

And those are just a couple of point examples of where it's very hard for IoT and embedded developers to really take advantage of all the power that modern software development practices and environments offer. So what we've decided to do at Arm, for a couple of years now, is take on a mission to revolutionize IoT software development and reduce the barriers to entry, several of which Bill alluded to earlier. We want to give the large market of software developers across the IoT more access to Arm development platforms; remove the dependency on silicon; remove some of the dependencies on heavyweight tools, models, licenses, and upfront dollars that prevent them from shifting their software development left; and really unlock faster time-to-market and accelerate application development for both embedded developers and IoT developers working in the cloud. We also want to offer a technology that gives scalable performance: not just execution speed, which is very important and one of the main reasons Corellium’s technology has been so valuable to the Arm Virtual Hardware portfolio, but also scalable enterprise performance, where you can scale the number of tests up and down to match your product development and release cycles, and you don't have to purchase heavyweight licenses for heavyweight on-prem software upfront to allocate resources for your peak test load.


(27:00)

Instead, you can have a consumption model that you can scale your DevOps and your MLOps to match the needs of your developers and your teams.


(27:11)

What exactly is Arm Virtual Hardware? Well, it's a virtual, fully functional representation of the physical hardware, much as Bill described. We provide the whole programmer's view for the software developer at the level of the application they're trying to write. That is typically the programmer's view of all of the SoC interfaces, the CPU, et cetera, to help bring up drivers and low-level algorithms all the way up to the application level. We've made these components cloud-native; they run and scale easily in the cloud, by virtue of the AWS instances that Corellium technology runs on, and there's API-driven, serverless access for the end developer to quickly and easily instantiate and manage devices in the cloud. And we have a technology here that's suitable for all IoT workloads, from smaller MCUs all the way through to rich IoT endpoints running AI, operating systems, and connectivity software stacks. And we remove the dependency on RTL or silicon availability.


(28:29)

So those are kind of the mission statements for the Arm Virtual Hardware technology. And with that, we are really proud to make these board-level Arm Virtual Hardware instances powered by Corellium. We have a range of devices, introduced about a year and a half ago with the Arm Virtual Hardware portfolio, that ranges from an STM32U5 Discovery Kit on the microcontroller side to single-board-computer-scale devices like the Raspberry Pi, and we've also added two boards from NXP’s i.MX line of products. Earlier this year we added the i.MX 93, which has an Arm NPU in it, modeled as well. And finally, we are adding Arm reference platforms that represent our Corstone base platforms, such as this Smart Vision Configuration kit based on the Corstone-1000 that we've enabled using Corellium technology. So we're doing these board-level devices, some from third-party providers but also Arm's reference platforms, that run rich IoT workloads, all through Corellium-powered AVH. And I think the best way to understand this technology is to see it in action. So with that, I have a nice demo that my colleague Pareena Verma put together when we launched the Corellium-based AVH last year. We're going to play it; it briefly shows how AVH works and how it can be used in CI/CD.


(30:20)

Pareena Verma: Arm Virtual Hardware is a cloud-based offering that enables software development without the need for physical hardware, thus reducing your product design cycles. In this demo, I will show you how you can leverage Arm Virtual Hardware to easily create instances of virtual IoT boards all within your browser and run applications on them, thus simplifying and accelerating your software development. Once you're logged into the website with your Arm account, select Create Device. You will be presented with a series of options, starting with the project to which you can add your virtual IoT board. Let's select the default project and proceed. The virtual IoT boards can be launched either with stock OS images or with a custom OS or firmware that you have built specifically for the board. For this demo, let's select the pre-configured firmware option. On this next screen, you can now view all the available virtual boards that can be simulated.


(31:24)

We're going to select the Raspberry Pi 4. The Raspberry Pi 4 comes with two versions of the Raspberry Pi OS, the lite and desktop versions. Let's select the lite version and proceed. On this final screen, you can give your virtual board a name and also configure advanced boot options, like the amount of RAM you'd like to allocate to it. Now all you need to do is select the Create Device button, and this action launches your virtual board in the cloud, ready for you to use and run your applications on. Creating the device takes about a minute to complete, so let's fast forward to one that I've already created. As you can see, our virtual Raspberry Pi 4 board is up and running in the cloud with the Raspberry Pi lite OS booted on it. We can look at the boot messages on our serial console and enter our login credentials when prompted.


(32:21)

This very much feels like running a physical board, only much faster. Let's try some basic Linux commands to look at the system information as well as the CPU information. With our virtual board booted up, we can run our applications on it. No better first application to try than Hello World. There we go: hello from our virtual Raspberry Pi 4. All the actions demoed here via the GUI can be scripted for command-line usage via the API. As a result, you can take advantage of your virtual board for testing your applications with continuous integration and continuous deployment workflows. For example, I created a simple GitHub Actions workflow using a GitHub runner that triggers the run of Hello World on my virtual Raspberry Pi instance anytime code is checked into my repository. Here's the log generated from my last workflow run. At the end of my workflow, I've also saved my build and test output as artifacts that will be used to generate test reports.
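The CI step just described boils down to running a command on the virtual board over SSH. Here is a minimal Python sketch of that step; the host address, user, and binary name are placeholders for whatever your virtual device actually exposes:

```python
import shlex

def remote_test_cmd(host, user, command):
    """Compose the ssh invocation a CI step could use to run a test
    binary on a cloud-hosted virtual board (host/user are placeholders)."""
    return ["ssh", f"{user}@{host}", command]

cmd = remote_test_cmd("10.11.0.2", "pi", "./hello_world")
print(shlex.join(cmd))  # ssh pi@10.11.0.2 ./hello_world
# Inside the workflow you'd execute it, e.g. with
# subprocess.run(cmd, capture_output=True, text=True, check=True)
```

A GitHub Actions job would call a script like this after checkout and build, and the captured output could be uploaded as a workflow artifact, as in the demo.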


(33:30)

Eric Sondhi: OK, so hopefully that demonstration gives you a more concrete view of what Arm Virtual Hardware is and how it can be used in a development workflow that incorporates CI/CD via GitHub Actions. So, a pretty straightforward view of what we're trying to deliver and achieve and help enable developers with. I just wanted to briefly go through some of the traction we've had over the last year and a half since Pareena rolled that demonstration out. We rolled out a private beta where we invited a large breadth of our Arm ecosystem to participate. As mentioned, we launched this in April of 2022. We engaged over a thousand users in a private beta that included our licensees: silicon providers that do a lot of modeling today, to try out this technology; OEMs and ODMs that make Arm-based products, to try out software development and explore modernizing their workflows; cloud service providers across the spectrum, whether they're already using Arm servers or, like AWS, have services that integrate Arm products; MLOps and SaaS providers like Qeexo, Nota AI, and Edge Impulse, to try out Arm Virtual Hardware in some of their MLOps-related workflows and see if it could be useful to deploy; as well as other SaaS providers like GitHub, and some CI/CD, IDE, and tools providers, to consider integrating with this technology or building out a more complete developer solution.


(35:26)

And we got a lot of comprehensive feedback and wonderful insights from over a hundred direct users. So over 10% of the user base was interviewed, and we really got some rich feedback to help drive the roadmap forward. We then moved forward to a public beta, launched earlier this year. With that, we've allowed AVH access to basically anyone who signs up and gets approved with an Arm account, so there's no longer an invitation barrier or a multi-day approval cycle. If you already have an Arm account, you'll get access to AVH; if you don't, you can sign up for one and get approval within a couple of hours. That access comes with free trial usage. We've supported hundreds of trials since May, and we're continuing to get feedback and to understand our customers' use cases a little more deeply.


(36:26)

We've actually been able to roll out integrations with ecosystem partners that you'll hear more about later in the year. Most recently we've got an integration with Remote.It that I'll talk to in the next slide. But most importantly, we are finding the real, valuable commercial use cases, and we've started onboarding our first commercial customers as of last quarter. So really, really good progress over the last couple of years. We're also building out the portfolio. I mentioned the devices we support today; there are several more devices to come later this year and next year. So very, very good momentum with AVH. Here's an example of what we mean by integration with our ecosystem solutions. Remote.It is an Arm partner in the IoT space that offers really simplified network connectivity to Arm devices and other devices, both physical and, now, virtual. They've basically simplified remote access, removing the need for IT departments to configure complex private networks or specific configurations, and really simplified the ability to connect to and manage a range of devices out there over the internet.


(37:48)

And what we've done is extended their technology, partnering with them to modify their client and host applications to detect our Virtual Hardware instances. Corellium has helped us integrate Remote.It's installation directly into the flow, and with that you can now connect to any Arm Virtual Hardware device natively and manage those devices as if they were physical devices, alongside your fleet of AVH instances. This is just an example of how Arm Virtual Hardware as a component can fit together with other components to build out a more complete solution. And as we build out the core technologies and integrate with other tools, that makes the solution fit for specific market segments. We've identified a couple of, you can think of them as hero use cases, with the user base that we have and with some of the partners and integrators. Where we see the most interest and traction is in smart home devices and smart city applications, specifically running Matter on Arm.


(38:55)

And that has a lot to do with the fact that Matter development teams prototype and design and test with Raspberry Pi devices as proxy devices for pretty much any device in the Matter architecture. And we've been able to take our virtual Raspberry Pi and do a lot of nice integration with Matter. The other really prolific use case that we see is rich IoT endpoints that are cloud native and software defined. What we mean by that is the device is connected, runs continuously, and is managed continuously in the cloud via an orchestrator, with not just connectivity to the cloud but management of the software. Often that software is AI-powered or software-defined ML, trained in the cloud and then deployed via over-the-air updates. The software-defined camera in particular is one that we've been able to take forward with specific implementations that we'll get into in a little more detail.


(40:00)

But first, Matter. Matter smart home prototyping with Arm Virtual Hardware was enabled last year through a number of nice examples where you can crawl, walk, and run with Matter development on our Virtual Hardware. For the crawl step, a series of examples and blogs was published by one of our developer advocates in the IoT line of business, my colleague Sandeep Mistry, who's been an excellent advocate and early adopter of Matter, and has ported it to a number of different devices and built some workflows. The first, crawl, step helps you get started with Matter out of the box with some simple steps: it introduces the protocol, introduces AVH, and helps you get up and running with the chip-tool to do basic operations with Matter on a Raspberry Pi. He built upon that with a follow-up blog about a year ago, where he takes that crawl step and builds on it to make a Matter home automation service using Raspberry Pi Arm Virtual Hardware and some simple Python, extending your prototyping capability with just out-of-the-box artifacts from both Matter and Arm Virtual Hardware.


(41:18)

And then finally there's a more complete, sophisticated use case of commissioning a device over BLE and then handing off to Wi-Fi, all using Arm Virtual Hardware. That makes for a very, very good end-to-end test to ensure that your Matter-based software can go all the way through the flow. This type of work can be done with the Android app instance that Bill mentioned earlier: if you wanted an end-to-end test that actually commissions via the app, you could extend this run case into a whole workflow where you commission your device via the app software. This prototyping use case has gotten the most traction with customers, and it's where we see our first commercial deployments as well. And we found that it's not only good for prototyping: Sandeep has been working with the Matter SDK development team, creating an end-to-end test directly in the connectedhomeip GitHub repo that uses GitHub Actions and instantiates Arm Virtual Hardware as an end-to-end test.
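
As a rough illustration of the BLE-to-Wi-Fi commissioning step Eric describes, the flow can be driven from the chip-tool CLI with its `pairing ble-wifi` subcommand. The Python sketch below assumes chip-tool is on the PATH and runs against a virtual Raspberry Pi; the node ID, network credentials, setup PIN, and discriminator are placeholder values, not ones from the talk:

```python
import subprocess


def build_pairing_command(node_id, ssid, password, pin_code, discriminator):
    """Assemble the chip-tool 'pairing ble-wifi' command that commissions a
    Matter device over BLE and hands it off to the given Wi-Fi network."""
    return [
        "chip-tool", "pairing", "ble-wifi",
        str(node_id), ssid, password, str(pin_code), str(discriminator),
    ]


def commission(node_id, ssid, password, pin_code=20202021, discriminator=3840):
    """Run the pairing command; the defaults here are placeholder test values."""
    cmd = build_pairing_command(node_id, ssid, password, pin_code, discriminator)
    # Requires a chip-tool binary on PATH and a reachable Matter device.
    return subprocess.run(cmd, capture_output=True, text=True)


if __name__ == "__main__":
    print(" ".join(build_pairing_command(1, "TestNet", "secret", 20202021, 3840)))
```

A CI job could call `commission()` after the virtual device boots, then assert on the process's return code to decide pass or fail.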


(42:30)

This is the latest pull request, which is about to get committed into the repository. It builds on some early work where we were doing just continuous regression tests, and will now enable effectively every check-in to get validated on AVH, allowing for continuous integration via the Raspberry Pi Virtual Hardware instances, so that any developer making modifications within the SDK team can be sure their code always runs on the end device. So not only do we support the Raspberry Pi, but we're looking to extend this to support some of the other applications as well.
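
In a CI workflow like the one described, a step has to wait for the virtual Raspberry Pi instance to come up before the freshly built binary can be loaded onto it. Here is a minimal Python sketch of such a wait step; the base URL, the instance-status endpoint, and the `state` field and its values are illustrative assumptions, not the documented AVH API:

```python
import json
import time
import urllib.request

BASE = "https://app.avh.arm.com/api/v1"  # illustrative; consult the AVH API docs


def is_ready(instance):
    """True once the instance record reports a booted state.
    The 'state' key and the 'on' value are assumptions for this sketch."""
    return instance.get("state") == "on"


def wait_for_instance(token, instance_id, timeout=300):
    """Poll a hypothetical instance-status endpoint until the virtual device
    is up, so the CI job can then deploy and run its end-to-end test."""
    req = urllib.request.Request(
        f"{BASE}/instances/{instance_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urllib.request.urlopen(req) as resp:
            instance = json.load(resp)
        if is_ready(instance):
            return instance
        time.sleep(5)  # back off between polls
    raise TimeoutError(f"instance {instance_id} never came up")
```

A GitHub Actions job would run this between the build step and the test step, failing fast if the instance never boots.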


(43:15)

And not just other applications, but other board-level models that are in the portfolio, like the i.MX platform. So AVH also enables the modern workflow for rich IoT devices, specifically smart camera, software-defined camera, and smart vision products, integrated into a cloud-native software developer's workflow. Here we have a nice view of a platform, what we call an Arm standard, SystemReady, IoT-ready hardware platform, which Arm Virtual Hardware is compliant with. It basically runs an entire software stack, a reference implementation Arm has put out there, that our colleagues at AWS have extended to make sure it works with the i.MX Virtual Hardware and their AWS IoT Greengrass operating environment. That connects into the cloud, takes advantage of multiple AWS services, and gives you not just the analytics and data to and from the device via dashboards, but also integrates with a web UI to take stimulus, streaming video data, and apply ML to it. For this, my colleagues Jack Ogawa and Dave Walters made a really nice presentation and demo running on the i.MX 8M Plus, and I'm going to cut over to a snippet to show in action how MLOps for these rich IoT devices can be achieved. So here Dave Walters goes through a little demonstration; bear with me one moment.


(45:03)

Dave Walters: We can see the screen is black; it's currently running my Edge Manager client camera integration application and waiting for a video feed. One of the really cool things about Arm virtual targets is that they allow you to connect virtual I/O that appears as if it's real I/O on the device. In this case, if I go into sensors, I can actually enable my webcam to appear as a camera connected to the virtual target. So when I click on Enable, I should see myself, and there I am. You can also see that my machine learning model has drawn a bounding box around me and labeled me as a person. That's good news, because I am a person. Let's check and see if it can also detect vehicles, because in this case I'm going to pretend that this application is a traffic monitoring camera. So I'm going to grab a car from off screen and drive it across the screen.


(46:12)

It is not detecting the car. So what we need to do is update that machine learning model. To do that, I can go back and revise my Greengrass deployment. If I go back to the Greengrass console, click on the Arm virtual target deployment, and click on Revise, I keep all of the same components, but I update the machine learning model component to V2 of the model. I review my final deployment and deploy the new application. It might take a few seconds for it to receive the new machine learning model and restart all the application components. So I believe it's done, and we have a black screen. Again, I'm going to enable my camera, and there I am; it's still detecting me as a person. And now let's try my car, and it does…I think it's a little confused by my hand there, but you can see that it can correctly identify and label a car through streaming video.
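
The console steps Dave walks through (revise the deployment, pin the model component to V2, redeploy) can also be scripted against the standard AWS IoT Greengrass v2 `create_deployment` API via boto3. In this sketch the component names, deployment name, and target ARN are placeholders, not the ones from the demo:

```python
def bump_model(components, model_component, new_version):
    """Return a copy of a Greengrass components map with the ML model
    component pinned to a new version; all other components are unchanged."""
    updated = dict(components)
    updated[model_component] = {"componentVersion": new_version}
    return updated


def redeploy(target_arn, components,
             model_component="com.example.ml-model", new_version="2.0.0"):
    """Revise the deployment with the bumped model version and push it out.
    Names here are illustrative; create_deployment is the Greengrass v2 API."""
    import boto3  # imported here so the pure helper above needs no AWS SDK
    client = boto3.client("greengrassv2")
    return client.create_deployment(
        targetArn=target_arn,
        deploymentName="arm-virtual-target-deployment",
        components=bump_model(components, model_component, new_version),
    )
```

The device-side Greengrass nucleus then pulls the new model component and restarts the application, matching what happens in the demo when the screen goes black and comes back with the updated detector.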


(47:55)

Eric Sondhi: Alright, so right there I think you just saw a lot of the features and capabilities that Bill was describing, including the video streaming I/O capability of Virtual Hardware, come together into a very real-world use case of updating the AI on an intelligent, cloud-connected, software-defined camera device. The full video is available on the developer hub as part of the on-demand video content; I'd encourage you to take a look if you're interested, not just for the AVH portion but for the general integration of Arm-based devices into AWS, along with the parity with hardware. Dave goes on to rerun the same update on an actual physical board, which really highlights the parity of the developer environment and the workflow, and also the binary compatibility that Arm Virtual Hardware has with actual hardware. So we're really grateful to the folks at AWS who were early adopters of Corellium Arm Virtual Hardware and have been helping us build integration into their services and create these software development platforms that are going to be very useful for our mutual customers. With that, I think you've seen quite a bit of how we are using our Virtual Hardware, and I'll hand back over to Bill to wrap this up for us.


(49:38)

Oh, Bill, you're on mute.


Bill Neifert: I am on mute. Hold on a second.


Eric Sondhi: Now you're back.


(49:45)

Bill Neifert: There we are. Excellent. So thanks Eric, that was awesome, great content there. Let's wrap up with two quick foils here. As Eric and I both mentioned, you've got free trials here. If you want access to the Corellium Virtual Hardware, which is primarily our mobile devices, you can do this by going to Corellium.com and clicking on Free Trial. You fill out a form and we'll send out a trial approval email to set you up; it typically takes a few hours. Similarly, Eric already walked you through the process for Arm: go to avh.arm.com, click on Login if you've already got an Arm account, and start running. If you don't have an Arm account, you can click on Register, fill out the form, and you'll receive a trial approval email within a few hours, so it should be pretty quick. Finally, we did present a lot of information here today, and there's a lot more out there, both on our own website as well as Arm's website.


(50:51)

You can access the documentation links either at Corellium or at Arm's developer site, so feel free to do that. Additionally, of course, you can also reach out to our support links or directly to myself or Eric. We actually are friendly and we'll respond if you have any questions. I've answered one or two questions from you as we've gone through this. I don't see any currently outstanding questions, so now is the time to ask. If you've got questions, either fire them in the chat or, if you'd like, you can raise your hand and we can let you speak to the rest of the assembled folks here. We're all friends here; no one will hold it against you. Let's see, not seeing any more questions; hopefully that's a reflection of the fantastic content that Eric and I presented here. Regardless, this video will be available for later viewing if you'd like to access it that way. Feel free to reach out to Eric and myself if you have any questions. Thank you so much for joining us, and feel free to access this on the website. Thank you very much. Take care, and thanks for joining us. Thanks everybody. Bye.


Eric Sondhi: Bye.

Speakers

Senior VP of Partnerships, Bill Neifert

Bill is the Senior Vice President in charge of partnerships at Corellium, which equips developers with the tools they need to advance the next generation of smart devices powered by Arm processors. Prior to joining Corellium, he was part of the Development Solutions Group at Arm where he managed the group’s marketing team. Bill joined Arm via the company’s acquisition of Carbon Design Systems where he was the Chief Technology Officer (CTO) and co-founder.

Senior Manager for Arm Virtual Hardware, Eric Sondhi 

Eric is the Senior Manager for Arm Virtual Hardware Go-To-Market in the IoT line of business. Eric works with Arm's lead partners, SoC designers, and software developers around the world to employ state-of-the-art simulation, modeling, and virtual prototype solutions for early SoC architecture and design, early software development, and system and software performance analysis.
