The Key to Faster, More Accurate Software Development
Explore how Corellium’s virtual prototype technology operates at real hardware speeds, using the same software as actual silicon to bridge gaps in speed and accuracy, supporting both 'shift left' and 'squeeze right' development strategies to maximize efficiency.
(00:09)
Bill Neifert: Hello. We are still waiting for folks to join here. We've got a few folks in the waiting room that are just trickling in. Let's give it a few more minutes for things to get started and then we will get going. Jason, you can see my screen, right?
Jason Yamada: Yeah.
Bill Neifert: OK, excellent. OK, we've got a good number of people here already. That's always a good sign. So why don't I go ahead and get started. We will have a recording of this available for anyone who wants to join in later. So thanks for joining everyone. I'm Bill Neifert, I am SVP of Partnerships here at Corellium. With me today I have Jason Yamada, who is our solutions architect. I'm going to be doing the majority of the talking since talking is what I do, but Jason will be joining us for the important stuff here a little later on to dive a bit more into the technology.
(01:13)
So let's talk about what we're going to talk about. We'll have some high-level stuff. What is a virtual platform? I'd like to think we're all agreed on this, but it's interesting: if you go and Google virtual platform, you get all kinds of interesting stuff out there that isn't what we would normally call a virtual platform. So we'll do that. We'll talk about the value of a virtual platform. I've got to get the marketing stuff included in this, so we'll talk a bit about the value. We'll talk a bit about our technology. Our approach to virtual platforms is different than what's traditionally been done in this space. We'll migrate a bit then to talk about cloud-native development and how virtual platforms fit into this. Tying into this, Jason will take over, talk about GitHub Actions integration, and give us a bit of a demonstration.
(02:03)
I will then retake control, talk a bit more about developing using virtual platforms, and show you a bit of the Arm RD-1AE, which is our newest virtual platform, built together with Arm, in the auto space. And then we'll have some time for Q&A. If there are any really important questions that come up during this, I will interject and try to answer them as they're going if I notice them. Otherwise we'll do that at the end. So feel free to enter any questions that you may have in the Q&A panel at the bottom of your screen. So with no further ado, let's get started. So what is a virtual platform? A lot of it depends upon what it is you want to do. It can represent a subsystem or a system or even a group of systems.
(02:53)
And you'll see we've got some examples of these on the left here. All of these have been done as virtual platforms at varying points in time, and you can even have a virtual platform that represents a bunch of other virtual platforms or a bunch of other systems. So one man's system is another man's subsystem, and that seems to be the case in virtual platforms as well. They can typically be at differing levels of abstraction or completeness. I certainly in my past have worked a lot on cycle-accurate virtual platforms or even approximately timed ones. Here today we're going to focus entirely on virtual platforms to be used for software development. That seems to be more and more the dominant use case here, and certainly the technology that we'll be talking about is coming at this from a functional perspective for software. One thing to keep in mind, though, is you don't actually need to have a physical device to be virtualizing in here.
(03:52)
You can virtualize a device well before it actually exists, or one that may never exist, if you want a completely virtual representation. But developing a virtual platform before the actual product exists is where a lot of the traditional value has come from. This is a slide that I stole from Arm’s auto announcement from earlier this year talking about the value of virtual platforms. If you look at a traditional development approach, which is typically used especially in the auto space, you'll see that it goes from IP available, to designing the hardware, then you do software development, then you do system integration, and finally you're able to ship a product. A virtual platform lets you start that software development basically as soon as you understand what your IP is. So you don't need to wait for the hardware to be done in order to do this.
(04:44)
You can start your software development early, which means you can start system integration early, which means you can ship the product early. And in a lot of spaces this can be years earlier than would otherwise be possible, which obviously has fantastic time-to-market advantages. If you look at the problems that are coming up in this space, though, virtual platforms can be used not just before silicon but also after silicon is available. And if you look at this, especially with the latest security vulnerabilities and such, you don't stop development on something just because the device ships. You need to keep creating updates and upgrades for years. In fact, the latest EU and UN regulations on cybersecurity state that you need to continue shipping software updates for 15 years after you stop shipping the device. So having a way to continually develop software throughout that lifetime is of course extremely important.
(05:50)
And if you've got a well-functioning virtual prototype or a virtual platform, then you can do this. In addition, if you look especially in the auto space, software upgrades and updates have been viewed as a fantastic revenue source. Most of the auto OEMs are touting that in 10 years they'll be getting a major portion of their revenue through software-defined vehicle functions. And of course, having a virtual way to do this development means that your development costs can be substantially cheaper than maintaining a fleet of cars in order to do your testing. So that's a lot of the value on this, but let's talk about virtual platforms and how they've evolved. I'm lucky or unlucky enough, as the case may be, to have been around for most of this evolution. If you look at this, virtual platforms started as entirely proprietary environments coded using internal methodologies, primarily for mobile phones when they first started coming out. I remember 20-25 years ago seeing virtual phones coming up, and they would have the whole display and such on there, and it was a great way to get up and running and test out every element of the design.
(07:13)
And I'm sure these were working long before I got involved with them, but let's just stick to the past 25 years here. As they started to get more value, standards started to emerge, and SystemC was born of this. SystemC was great because it gave us a nice standardized representation and basically a common language to do all of this. And we quickly discovered that it wasn't just the language itself that needed standardization, it was the interface. So TLM standardization came out of that, which is great because we now had standardized mechanisms to describe and deploy models, but they're slow. SystemC is inherently single-threaded, although there have been efforts to expand on this, and it's still a simulation running on top of what's traditionally an x86 platform. And since most of the devices that you're virtually modeling are Arm-based devices, this means that you're basically running really slowly. Now, though, there is a new generation of Arm servers available, either plugging in on site or available in all the cloud providers.
(08:29)
There's a new generation of virtual platforms based upon device virtualization, which means you can run orders of magnitude faster than you've been able to do with traditional methodologies. As you may guess, Corellium’s technology is based upon device virtualization; it would have been a really poor setup slide if it weren't. Let's talk a bit about Corellium’s technology in this space before we get into the value that it can offer. So Corellium has been around for about six or seven years now, and up until a couple of years ago we were focused primarily on mobile phone virtualization. In fact, even today if you go to our main website, you will find every iOS device running every version of iOS in jailbroken or non-jailbroken fashion. So you can find the iPhone 16 running iOS 18 in here, giving you the ability to spin this up.
(09:28)
We do the same thing with Android devices as well and have Android 7 through 14 supported. I think we're about to roll out Android 15 as well. And in addition to being able to just run the apps, we have the ability to virtualize most of the peripherals in here, including Bluetooth, camera, microphone, GPS, et cetera. You can even go through and do some of the authentication mechanisms on here and virtualize some of that behavior. So these phones have been primarily used for doing vulnerability research, identifying vulnerabilities that may exist in the various operating systems, but also for security testing. We've got automated products here to do automated pen testing and analysis, and then taking all of that and plugging it into a DevSecOps flow. Underlying all of this is a fantastic hypervisor technology that runs directly on powerful Arm servers. So before I get to the technology itself, let's talk about where it can run. Our virtual hardware runs in AWS.
(10:30)
If you go to our website and log in there, you're going to be going to our little home on AWS. So all the devices that we'll be showing you today are running on AWS, but we can also deploy this on site. We have 2U servers running either Ampere Altra or Ampere One, or in some cases NVIDIA Grace cores, which you can plug in and run on your premises. In fact, for security research, this is the preferred way to do this. If you still want to run in the cloud but want to run in your own cloud, you can take our devices and deploy them in your own VPC. And finally, you can also deploy these onto a desktop appliance, which in this case is an NVIDIA Orin device, and that's great if you want to have something running very fast right there on your desktop.
(11:21)
So that's how you can use the stuff, but let's talk a bit about the technology itself. As I mentioned earlier, we use device virtualization and we use a hypervisor in order to do this, but the hypervisor has basically different sub-applications here depending upon what you're doing. So if you look at the far left of the screen, if you're running traditional OSs on Cortex-A processors, then you're likely running directly on the hypervisor itself. So you're running at Exception Level 0 or Exception Level 1 directly in the hypervisor, and that hypervisor then runs directly on the underlying server. We don't have an operating system in the way; it's what is called a Type 1 hypervisor. This gives us the ability to directly take advantage of the features of the underlying hardware, run at fantastic speeds, and also do device virtualization much more easily.
(12:19)
The world is not all Cortex-A processors running at Exception Levels 1 and 0, however; we have mixed criticality in here. And so if you want to model R or M processors, especially traditional R or M processors, which are 32-bit, then we have a technology called arm-to-arm. This basically remaps the instruction stream from 32-bit into 64-bit native instructions, which then run natively inside the hypervisor. We do this for Cortex-R cores running 32-bit at Exception Levels 2 to 0, and for Cortex-M cores. The nice thing about this is, since you have the ability to run at the different Exception Levels, we can actually run hypervisors in the Cortex-R cores using arm-to-arm. If you want to get even more advanced, though, and start running 64-bit hypervisors in Cortex-R or 64-bit hypervisors in Cortex-A, then we have a technology that we call sysarm that remaps 64-bit R and A instructions at any Exception Level into running at EL1 or EL0, depending upon whether it's the OS or the application.
(13:41)
So the end executable itself doesn't realize what's happening at all here, but we are basically invisibly remapping to a different Exception Level. This means that you can take unmodified hypervisor code (we've done this and demonstrated it with a QNX hypervisor, a Xen hypervisor, and multiple others) and run it unmodified in our systems using sysarm, all while running at fantastically high speeds. So we've talked about the technology, and we've talked about where you can run the technology. Let's talk a bit about cloud-native development, because ultimately you really want to be taking advantage of the cloud. It's where most modern software is being developed. As I've already said, you can develop on-prem if you want to, and we have solutions for that. But the cloud is really where a lot of the new innovations are taking place. And if you want to modernize your development, it is going to be done in the cloud.
(14:38)
And this is nice because virtual devices can leverage these cloud-based methodologies. You can use APIs to control, debug, and interface with other devices. So you can get this up and running in a nice deployable manner. Putting it in the cloud means you can scale things quickly and efficiently. You can easily go from running one to a hundred instances just using various APIs. And the nice thing is, if you've got well-behaved cloud devices, you now have complete control over costs here as well. You're not being constrained by licensing costs, et cetera. If you are not running it, it's not incurring costs. When you are running it, your costs scale with how many instances are running. The cloud is pretty ubiquitous. You can take the same virtual device and deploy it worldwide, which means you're keeping pace with worldwide development teams.
(15:32)
They can all be running the exact same image at the exact same time, and you can deploy new images out to them without needing to worry about shipping changes, or what platform they may be on, or whether they may have a slight OS dependency, et cetera. Running in the cloud, you can easily standardize on these sorts of things, and of course you can shift left doing this, but you can also shift right, as we discussed, and do things like OTAs and manage things from there. So you can start development before silicon and run for years after devices ship. To dramatize this a bit: if you start with a virtual device, you can deploy it simply out to everyone. You can control the access on how this is done, and certainly we have the means to do this here at Corellium so that you can control who has access to this and give them access to this.
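To make that one-to-a-hundred scaling point concrete, here is a minimal Python sketch, assuming a bearer-token REST API; the endpoint paths, flavor name, and JSON fields are illustrative placeholders rather than the exact published schema.

```python
# Hypothetical sketch: scaling virtual devices up and down through a REST API.
import requests

SERVER = "https://app.avh.corellium.com/api"  # assumed base URL
TOKEN = "YOUR_API_TOKEN"                      # assumed bearer-token auth
PROJECT = "YOUR_PROJECT_ID"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def spin_up(count: int) -> list[str]:
    """Create `count` identical virtual devices and return their instance IDs."""
    ids = []
    for n in range(count):
        resp = requests.post(
            f"{SERVER}/v1/instances",          # endpoint path assumed
            headers=HEADERS,
            json={
                "project": PROJECT,
                "flavor": "rpi4b",             # hypothetical flavor name
                "os": "11.2.0",                # hypothetical OS version
                "name": f"ci-node-{n}",
            },
        )
        resp.raise_for_status()
        ids.append(resp.json()["id"])
    return ids

def tear_down(ids: list[str]) -> None:
    """Delete the instances when the run is done, so no further cost accrues."""
    for instance_id in ids:
        requests.delete(f"{SERVER}/v1/instances/{instance_id}", headers=HEADERS)

# Scale from one to a hundred and back down again.
nodes = spin_up(100)
tear_down(nodes)
```

The teardown step is what makes the cost model work: instances that aren't running aren't billing.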
(16:30)
And you can then link this with powerful tooling. The cloud is where modern software development is done, so there are a number of different tools and integrations that you can take advantage of out here. Once you've used this tooling to identify bugs, then you can deploy snapshots and things like this to other team members who may be able to QA this or fix this or anything from there. You have the ability to create these snapshots and deploy them out, which means you get quicker feedback, you get the fixes in, you get easier patching, and then you're back into the virtuous circle of simplified deployment, which is inherent in a CI/CD type flow. Jason will be getting to that shortly, but just because you're running in the cloud doesn't mean you're cloud native. There's a fun T-shirt poking fun at the fact that there is no cloud, it's just someone else's computer. But there's way more to the cloud than just being someone else's computer.
(17:35)
If you are really taking advantage of the cloud, then you're running in a cloud-native fashion, and this means that you're running self-contained. There is no dependency on licensing or license servers. You don't have to go through a chain of possible failures, standing up remote gateways in order to do things, and remote license servers, et cetera. If you're truly cloud-native, then you've got the ability to scale this from zero to unlimited, basically just depending upon the cloud resources available to you. And the cloud, for all intents and purposes, is basically unlimited at this point in time. You've got API controls on here, and this is a huge ease-of-use thing; you'll see it when Jason gives his demo here in a second. You ideally have APIs controlling every aspect of this so you can fit with these flows.
(18:26)
You're controlling things like authentication and provisioning, how to spin a device up and down, and then access things inside of it. And then of course you're tying this into CI/CD flows. There are a large number of CI/CD providers in the cloud. If you've done your work properly, you can integrate directly with all of these and take advantage of them. And of course being in the cloud means that you can now deploy this worldwide. So you've got a standard set of platforms that can be used. You can take a snapshot in one area and easily hand it off to someone else and enable them to get up and be productive without saying, OK, now you need to worry about having this exact OS revision, this exact platform, et cetera. It all just happens automatically for you.
(19:19)
And so if you use the cloud properly, you can do that. Then, of course, you've got SaaS pricing as well. Software as a service is inherent in the cloud: the more you use it, the more you pay, and if you don't use it, you don't pay. A properly behaved cloud-native application will have all of these features. So let me hand it off to Jason now to talk a bit more about our cloud-native application and how it fits in with GitHub Actions, and he'll give you a bit of a demonstration as well. So Jason, I will continue driving the foils here. I assume you want me to go on to the next foil?
(19:58)
Jason Yamada: Now? Yes, please.
Bill Neifert: Okay.
Jason Yamada:
So you've heard a lot about the CHARM offering. Now let's talk a little bit about our integrations, our SDK, some of the examples that we've come up with, and our API too. A lot of our models come prepackaged with CoreModel; if not, we have the ability to deploy it as well. CoreModel gives you access to a host of functionality that you can exercise through our examples. On top of that we have our Python examples, JavaScript examples, and our SDK as well, where you can interact with our instances, do your provisioning, manage your teams, and interact with the model: stop it, start it, take snapshots, a lot of that good stuff. We also support the API with Python, JavaScript, and REST. I've had a chance to play with those, and it really gives you a good understanding of how we've tried to take this approach of helping our software developers have an easier time interacting with our models, being able to stand them up with ease and understand what kind of information they get to interact with. On top of that, we have GitHub Actions in place, and we have integrations with CircleCI, Jenkins, Travis CI, and GitLab. We'll go ahead and move on to the next slide.
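As a rough illustration of the lifecycle control described here (stop, snapshot, start), a minimal Python sketch follows; the endpoint paths, `state` field, and state names are assumptions for illustration, not the verbatim published API.

```python
# Hypothetical sketch of a stop/snapshot/restart cycle against a REST API.
import time
import requests

SERVER = "https://app.avh.corellium.com/api"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}
INSTANCE = "YOUR_INSTANCE_ID"

def wait_for_state(target: str, timeout_s: int = 300) -> None:
    """Poll the instance until it reports the target state (e.g. 'on' or 'off')."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = requests.get(f"{SERVER}/v1/instances/{INSTANCE}", headers=HEADERS)
        resp.raise_for_status()
        if resp.json().get("state") == target:  # 'state' field name assumed
            return
        time.sleep(5)
    raise TimeoutError(f"instance never reached state {target!r}")

# Stop, snapshot, restart: the shape of a typical CI hook.
requests.post(f"{SERVER}/v1/instances/{INSTANCE}/stop", headers=HEADERS)
wait_for_state("off")
requests.post(f"{SERVER}/v1/instances/{INSTANCE}/snapshots",
              headers=HEADERS, json={"name": "post-test"})
requests.post(f"{SERVER}/v1/instances/{INSTANCE}/start", headers=HEADERS)
wait_for_state("on")
```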
(21:37)
So just for a brief demonstration here, I put together something simple. Essentially we'll make a change in the IDE, we'll go ahead and commit and push that change to GitHub, which will trigger our GitHub Actions. I'll show you the YAML file that I'm using, and then what we'll do is interact and create a Raspberry Pi on our Corellium model. So I'm going to go ahead and share my screen and show you what we put together here. Alright, great. Can you see my screen? OK, so on the main screen here is the YAML file. This is inside of my IDE. I'm using Visual Studio Code, but you can utilize any IDE, get some extensions, or integrate with GitHub or any of the other CI/CD tools that you're looking to integrate with. So it's pretty basic here: setting up a push trigger, I'm passing in our project, token, and server to essentially be able to interact with our APIs, and then I'm sending our flavor (that's what we like to call our model's operating system version), and I gave it a name as well so that we can identify it for future use.
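For readers who want the general shape of that YAML, here is a minimal sketch of a workflow like the one described; the CLI package name, command flags, flavor name, and secret names are assumptions for illustration, not the exact published schema.

```yaml
# Hypothetical workflow: push triggers creation of a virtual device.
name: create-corellium-device
on: push

jobs:
  create-device:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install the Corellium CLI
        run: npm install -g @corellium/corellium-cli   # package name assumed

      - name: Log in to Corellium
        run: corellium login --endpoint "${{ secrets.CORELLIUM_SERVER }}" --apitoken "${{ secrets.CORELLIUM_TOKEN }}"   # flags assumed

      - name: Create a Raspberry Pi device
        run: corellium instance create --project "${{ secrets.CORELLIUM_PROJECT }}" --flavor rpi4b --os 11.2.0 --name ci-rpi   # flags and flavor assumed
```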
(23:00)
So I've already gone ahead and made a basic change to a txt file here. I'm just going to go ahead and commit and push, and now is where we're going to see the magic really happen. So once I push here, we are going to see this action kick off.
(23:25)
So this is inside just GitHub, regular GitHub. We're installing our dependencies, setting up a runner, and now we're setting up our Corellium CLI to be able to interact with and create our device. So from here, the interaction that we're seeing, this is our AVH; this is where all of our virtual models currently are. I have a Raspberry Pi model that is spinning up through this GitHub Action here, and once this completes, I can go ahead and move on and interact with the rest of this device. Now, just to give a little bit of background as to what is truly possible here: because we have access to these GitHub Actions, our APIs and REST APIs and our CLI, we can essentially stand up a runner that can perform SCP and SSH. So once this model is stood up and this completes, I can actually add additional functionality to my runner here to interact with this model, upload a binary, run a compile, run SSH or shell scripts, and then retrieve the results back to the runner and add those as an artifact. This is just a very basic example of just standing up the model, but from what we can see here, there is so much more that we are able to do and have access to just because we can interact at this GitHub Actions level. So once the device gets created, as you can see, it's–
Bill Neifert: And this is always the slowest part, especially when you're doing a demonstration right in here.
(25:10)
Jason Yamada: Absolutely, absolutely. But in comparison to, say, spinning up other simulators or other devices that might require more resources, or even a physical device as well, we can set this up. And the beauty of it is, too, at this point we could also take snapshots and then share those snapshots of this model with everybody else, or we could write a shell script that configures this model for future automation, future tool set updates, different firmwares, and things like that. As you can see here, our GitHub Action completed successfully, and our device is ready. And that was all triggered through our IDE and our interactions between GitHub Actions and Corellium.
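The SSH-and-artifact extension mentioned above might look like the following follow-on steps, continuing the job from the earlier workflow sketch; the device address, user, and paths are hypothetical placeholders.

```yaml
# Hypothetical continuation of the job above: run a test over SSH on the
# virtual device and keep the output as a build artifact.
      - name: Run tests on the virtual device
        run: |
          scp ./build/test-suite pi@"$DEVICE_IP":/home/pi/
          ssh pi@"$DEVICE_IP" '/home/pi/test-suite > /home/pi/results.txt'
          scp pi@"$DEVICE_IP":/home/pi/results.txt ./results.txt
        env:
          DEVICE_IP: ${{ secrets.DEVICE_IP }}   # placeholder address

      - name: Upload results as an artifact
        uses: actions/upload-artifact@v4
        with:
          name: device-test-results
          path: results.txt
```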
Bill Neifert: Excellent. Thank you so much, Jason. Let me go back to sharing on my side here. You're going to stick around for questions as they come up as well, right, Jason?
Jason Yamada: Mhm.
(26:26)
Bill Neifert: Excellent. OK, so Jason just gave you an example of using our stuff tied in with GitHub Actions. Let's see what else is possible around our stuff. So let's talk about developing using virtual platforms. If you look at a normal deployment onto a device, you've got a variety of software and inputs into the process. If you're compiling for Linux, it's likely got a BSP. If it's AUTOSAR, it's an MCAL, and there's some sort of board configuration. You've got your OS and RTOS files, you've got applications potentially coming from your silicon vendor, you've got your own applications, you may have your large language models, and these basically all come together to form your binary image. This is what you want to load on the device. And in fact, for a physical device, this is exactly what you do. You take this binary image, deploy it out to your device, and then you connect this to your environment. If you are debugging on this, you're hooking up with things like TRACE32 or Arm DS, et cetera. From here you're tying this in with XiL tooling, either SIL or HIL typically, things like CANoe or SIL Kit, and then other interface tooling; oftentimes maybe you use Wireshark or something like that to analyze PCAP files. It's certainly not an exhaustive list, but it shows an example of basically the flow that you're traditionally using today if you’ve got silicon.
(28:09)
With Corellium, we're actually taking this same exact binary image unmodified; this is how we like to do things here. You don't have to modify anything in order to run this. You deploy it onto a Corellium Virtual Platform and then you use our APIs to basically interface out to the exact same devices and exact same interfaces as you would if you were running the same thing in silicon. So this is something that people often ask me: how do I target for my virtual device? And the answer is, well, target it like you're targeting your physical device, and we will run the same way. We intentionally design our platforms to mimic the behavior of the actual device. In fact, we follow the same binary pathways inside of this as well.
(29:01)
We don't typically use VirtIO to model peripheral devices in here. We instead model them exactly. So you're running not only the same drivers as you would on the real device, but you're exercising the same binary pathways. Let's take a look a bit further into this and see the anatomy of this mixed-criticality model and its various interfaces. So in the middle of this we've got the Corellium model, and you'll see in this case we've got the hypervisor running at the bottom. In this case we've got A, R, and M processors running hypervisors, OSs, and applications inside of here. This is extensible or connectable in a variety of different ways. So we'll start on the left and work our way around. We've got virtual memory-mapped I/O, which basically means that we have the ability to take a region of memory and connect it to a virtual memory-mapped device.
(30:03)
So you can have your own peripheral that you've coded running on another device, connected over the network and assigned to a memory region. Anytime the device accesses that memory region, it will send out communication over a network socket to your target device, which then should respond and give something back. Similarly, if you want to initiate something from a virtual memory-mapped region, you generate an interrupt and share things over the memory interface. So it's a very nice, easy way to extend an existing device. Inside of the device itself, we have the ability to integrate existing C models, and this is where we bring in things like different processor models or accelerators, DSPs, et cetera. So we have the ability to integrate these and run them in dom0, which is hypervisor speak for basically the control domain for us.
(31:04)
This is a Linux domain running Ubuntu. So as long as your C model can run on Ubuntu and communicate in our memory-mapped fashion, then we can typically integrate it very directly and leverage this. This is exactly how we have done the neural processor on several of our devices: we've taken the existing NPU model from Arm, plugged it inside of here, and forwarded accelerator calls over to it. Of course you want the ability to debug this. We expose GDB interfaces for all of the various processor types, A, R, and M, and provide GDB data along with extensions. And we've validated this with a number of partners here, including Lauterbach, IAR, TASKING, Slash One, IDEA, VS Code, Arm DS, and many more. Basically, as long as you've got a GDB-compliant debugger, you can attach to and control this, as Jason was just showing.
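To make the virtual memory-mapped I/O idea from a moment ago more tangible, here is a toy Python sketch of a peripheral process answering reads and writes over a socket. The 16-byte wire format here is invented purely for illustration; the real interface has its own documented protocol.

```python
# Toy virtual peripheral: one 4 KB register window served over TCP.
import socket
import struct

REGS = bytearray(4096)  # the peripheral's register file

def serve(host: str = "0.0.0.0", port: int = 9000) -> None:
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                hdr = conn.recv(16)
                if len(hdr) < 16:
                    break
                # Hypothetical message: op (0=read, 1=write), offset, length, value.
                op, offset, length, value = struct.unpack("<IIII", hdr)
                if op == 0:   # read: return the bytes at that offset
                    conn.sendall(bytes(REGS[offset:offset + length]))
                else:         # write: update the register file
                    REGS[offset:offset + length] = value.to_bytes(length, "little")

if __name__ == "__main__":
    serve()
```

The point of the pattern is that the device under test just touches memory; the socket hop to your own process is invisible to the software being exercised.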
(32:10)
There are REST APIs that can be used to control, share, and manage devices, easily tie into CI/CD flows, and do this through all the various language bindings. We actually provide a bunch of examples around this as well, on ways to use it. The configuration information that Jason was showing for tying in with GitHub, those YAML files, are available as examples in our SDK, along with example files to tie in with things like CircleCI and Jenkins and other CI/CD flows. We also have example flows here to stand up a bunch of devices for when you want to give trainings or things like that, since virtual environments are great for doing trainings. If you want to develop your own models, we have the ability to do this using what we call CDK, or CHARM Developer Kit, and you can use this to take the code and models from us and either create your own models or extend them.
(33:12)
This is the same thing that our folks are using internally. So it's not necessarily an API in the normal sense, but it fits nicely on the slide. Otherwise, over on the right-hand side, we have the ability to interface with any of the peripherals inside of your device. So we can take over the USB ports on your device, we can redirect your webcam, or we can take audio feeds inside of this. The benefit of running at these high speeds is that we have the ability to keep up with actual peripheral data. Ethernet comes in this way as well. And then finally, for any other peripheral interfaces that you may have: in the auto world, you've got things like CAN and LIN and a variety of other things. We provide transactional interfaces for these through what we call our CoreModel API, and we've used this to easily tie into software-in-the-loop tooling from folks like Vector, with whom we've got a great partnership.
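For a flavor of what transactional CAN traffic looks like, here is a small sketch using the open-source python-can library's in-process virtual bus, standing in for the CoreModel bindings themselves (whose exact calls live in Corellium's published examples rather than being reproduced here).

```python
# Two nodes exchanging a frame on a purely virtual CAN bus via python-can.
import can

with can.Bus(interface="virtual", channel="vcan-demo") as tx, \
     can.Bus(interface="virtual", channel="vcan-demo") as rx:
    msg = can.Message(arbitration_id=0x123,
                      data=[0xDE, 0xAD, 0xBE, 0xEF],
                      is_extended_id=False)
    tx.send(msg)                  # one node puts a frame on the bus
    frame = rx.recv(timeout=1.0)  # the other node sees the same frame
    print(frame)
```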
(34:08)
And if you recall from the last webinar I gave this summer, we did a demonstration of using the CoreModel API to drive a virtual CAN bus talking out to Vector. So if you have an interest in that, either pose a question here or take a look at that webinar on our website. What this all means is that we have the ability to establish a fairly rich ecosystem of supported OSs, debuggers, platforms, et cetera. So on the left is a sampling of various OS and software payloads that we have up and running on various devices here. And you'll see it's pretty much any software that you'd want to run, especially in an automotive use case. We’ve got AUTOSAR and QNX inside of here, and most of the various automotive OS flavors. We’re adding to this all the time as we roll out more boards and bring in additional software payloads. Obviously, we need the ability to tie in with debuggers, and we support pretty much all of the standard debuggers.
(35:20)
We have hardware partnerships in place with Arm, NXP, ST, and Raspberry Pi, and are currently working on numerous additional ones as well. And then finally, integrating into various cloud flows: I talked a bit about the fact that we run natively on AWS or have on-prem boxes for doing this onsite, but all of these have the ability to tie in with a lot of these cloud-based flows. Even when you're running onsite, you can use the same API definitions to interface with your onsite appliance. So you can take one flow and it will be portable across your device no matter where it's modeled.
(36:02)
If you look at traditional virtual prototypes, they can't take advantage of a lot of these things because they take too long to develop, and by the time you get them they're too late to add value. They run too slowly for modern workloads; if you look at the single-threaded, model-based simulation that I was talking about earlier, it can take tens of minutes or even hours to boot. If you look at the Raspberry Pi model that Jason was just showing, that Raspberry Pi model runs five times faster than the actual Raspberry Pi. The delay you saw up front was us basically allocating the space for it, but once you've established a device, it boots extremely quickly. Traditional virtual prototypes will require you to modify code, or at least use different compile flags, or they model the peripherals too simply, so you're not testing the actual device.
(36:56)
And of course, traditional virtual prototypes don't have cloud-based pricing like what we have here. So we have a bit of time left, so let's talk a bit about the RD-1AE. The RD-1AE is a device that was announced earlier this year by Arm. It’s basically the next generation of automotive platform, and it is a very complex platform. It has three different domains inside of it. We've got your application domain on the left, which is based on Neoverse V3s; it can have anywhere from four up to, I think, 64 of these inside of it. Our system for this has four represented. We can scale up to more if you need to, but you see inside of this it's running a very complex software stack. You've got Trusted Firmware for Cortex-A, U-Boot, GRUB, and then the Xen hypervisor with various Linux payloads running inside the application domain.
(38:00)
In the middle, you've got a variety of safety islands, which are based on the Cortex-R82AE. You'll see configurations with one, two, and four cores, so we've got seven Cortex-R82AE cores represented within this system. And then we have the security domains and control domains over here, which have been modeled using Cortex-M55 and Cortex-M7 cores. So you'll see it's a very complex set of IP, and this all comes together in an eight-stage boot process. We stole this diagram from an Arm website, basically showing the progress as it starts running inside the secure domain and then slowly branches out to boot up the safety islands, and then finally the application processors come in. The whole process can be fairly complex, but we've implemented the entirety of it in our device here.
(39:06)
And so let me show you the RD-1AE. Let me show you first Arm Virtual Hardware itself. When you are using Arm Virtual Hardware, you've got a variety of devices available to you. We've got generic Android if you want to run apps. We've got the i.MX 93 and the RD-1AE, so let's actually bring that up. I don't want to make you watch it partition over here, so let's actually look at the one that I have already, but we will watch it power up, because this power-up process is going to walk through all eight stages in about 30 seconds. So we're booting in an eight-step process here, through every step of what we've just run through, in a very fast amount of time. And in fact, I think I may not have.
(40:10)
So you see the last stage of this is Linux booting up on the application processors, and it's coming up on this now. You'll see that what we have represented inside of this is basically a console port into all of the various aspects of the RD-1AE, going from the secure cores (and you can see what's available there) to non-secure, which didn't display any information, to the SCP, and then finally down, like I say, into the non-secure portion. And let's clear out any stuff in the buffer here, and we can log in now, and you'll see it behaves just exactly like a Linux device, because it is a nice, well-behaved Linux device. It's running 13 cores here, running Linux right here. In this case, it's running based upon Xen. You can load your own OS into this; we have documentation on doing this.
(41:15)
We actually were just working with the Red Hat folks yesterday to get CentOS up and running on here as well. So this is available for you today to try out in a free trial if you would like. So that's the RD-1AE. Let's wrap things up a bit. We've talked a bit about models; I talked a bit about the CHARM Developer Kit earlier and how to get your own models with that. Those are the same tools that we use internally. It's an appliance and source code for all of the models in here. And as part of this, you get direct access to our engineering. Of course, you may not want to develop models yourself or pay us to do it. So we also have a great partnership with a company called The Judge Group, a multinational services team with a lot of experience creating models for Corellium.
(42:15)
We've used them to create some of our own models. Arm has used them extensively to create models, and they have also engaged with several other customers to create not only the models but also the flows around them. On top of this, they have a large group of scalable and trained engineers and, like I said, they have the experience of integrating this into flows as well. So we have the ability not only to create models ourselves or enable you to do it, but we have the partnerships to do it as well. Finally, if you look at how we overcome the traditional virtual prototype barriers: we have a variety of models which are available today and which can be created using CDK, traditionally much faster than it takes to create models using SystemC. Our devices tend to run as fast as or faster than real devices.
(43:04)
As we just showed, we're booting 13 cores through an eight-stage boot process in 30 seconds. If you try that on the FVP model of this, it's going to take about an hour. We're running the same exact binary as the real device, executing the same binary pathways as well, because we've modeled the peripherals exactly. So you can have confidence in all of this. And our stuff is based on a SaaS payment model, no additional licensing needed, and our pricing starts at 50 cents per core hour, so it's certainly very approachable. If you want, you can get access to our devices today at app.avh.corellium.com. Trial users get a hundred free core hours. So you can simply go to this URL and log in, either request a trial or, if you have an Arm ID, log in directly with that, and we’d be more than happy to set you up with all of this. So that is the end of our presentation today. Without rehearsal, we've managed to come in exactly at the 45 minutes I was told to. I don't know how we did this, Jason, but this does give us time for questions and answers. I see we've already got at least one question queued up for us there. If you'd like, please go ahead and type any additional questions in as I am answering them. So let me stop the share; my dog wants to chime in, let me get rid of him.
(44:41)
Jason Yamada: So it looks like we do have one question for now, which is: clock tick for clock tick, is there no jitter from the hypervisor?
Bill Neifert: I can field this one if you'd like, Jason.
Jason Yamada: Yeah, yeah, it's a really interesting question.
Bill Neifert: So we actually do model time exactly inside of the system. We don't run on simulation time here; time is invariant, and we map device time directly onto real time. If you're running AUTOSAR and you have a five-millisecond interrupt that goes off every five milliseconds, that interrupt will go off every five milliseconds inside of our device. So there is timing determinism that comes from this, and we ensure that throughout all of our execution.
(45:34)
Jason Yamada: So all of this sounds really excellent. What are the limitations? Because there are certainly limitations.
Bill Neifert: I mean, there are always limitations, right? Actually, though, there aren't that many. What we have found is we do a fantastic job of executing Arm code because we're executing it directly on the Arm core itself. Accelerators behave a little differently than what you'd think. Accelerators typically run faster; they are accelerators, after all. Well, for us, accelerators need to be simulated, because we don't have the native representation of your accelerator. And so even in the fastest implementation, it's still going to run slower than if you just ran the algorithm on the processor itself. So it's slightly counterintuitive behavior that an accelerator will run a bit slower. We have the ability to use accelerator models from the various vendors, so they do run correctly, just a bit slower. And how much slower?
(46:42)
That depends upon how well the accelerator model itself was written. That is the primary limitation on this, though. I guess the other would be that we're not timing accurate. I led with that: this is a functionally accurate device. It will boot your unmodified OS and binary image and do it well, but it will not give you accurate performance data, because we are a functional simulator. You can get qualitative results here: if something takes twice as long to run on our stuff, it'll probably take longer on the actual device. I wouldn't say it's necessarily twice as long, but you can get qualitative-type results there. So those would probably be the limitations.
Jason Yamada: If I have my own Yocto Linux BSP for IMX93 or Raspberry Pi, what's the path to flash this to the cloud device, or is there some Yocto recipe I must add?
Bill Neifert: That's a great question. So if you've got your own BSP: number one, like I say, if we're modeling a device, use the BSP for that device. We have i.MX 93 devices, for example, and so if you want to target the i.MX 93, compile with the BSP for the i.MX 93 and it will run directly on our device. Excellent timing on that, Jason. And once you have done that, you can upload your own firmware exactly as Jason is showing here, and it will run on there. This is exactly how we do it here. A lot of times, by the way, it's easier to start with the source image that we provide and just replace aspects of it. The source image itself (and boy, Jason, you started a long process on that) is a zip file which basically has images to be loaded into the various memory devices and then a description of all of this. So a lot of times it's easiest just to replace the image in there with your own ELF file, zip it, and then upload it using the upload mechanism. But to be clear, you use the exact same BSP as you would for the physical device.
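A minimal Python sketch of that replace-and-rezip step, with all archive and file names hypothetical (use the layout of the source image you actually downloaded, and whatever upload path your setup uses):

```python
# Sketch: unpack the vendor firmware zip, swap in your own ELF, re-zip.
import shutil
import zipfile
from pathlib import Path

src_zip = Path("rd1ae-firmware.zip")  # source image from Corellium (name assumed)
workdir = Path("fw-unpacked")

# 1. Unpack the provided firmware package.
with zipfile.ZipFile(src_zip) as zf:
    zf.extractall(workdir)

# 2. Swap in your own ELF, keeping the file name the archive's manifest expects.
shutil.copy("my-kernel.elf", workdir / "images" / "kernel.elf")  # paths hypothetical

# 3. Re-zip everything, preserving the internal layout.
out = shutil.make_archive("rd1ae-custom", "zip", workdir)
print(f"upload {out} through the web UI or API")
```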
(49:06)
Jason Yamada: Great. Can you please repeat the interfaces that you support? I mean, what do I need to do to integrate an existing C++ model into your environment?
Bill Neifert: So the interfaces that we support are basically most of the standard ones. Jason, if you can go and do a quick search for Corellium CoreModel... and here you'll see the first page that should come up is the... isn't it fun having people watch you type as fast as you'd like to? Yeah, Corellium CoreModel. And the first thing that'll come up is our GitHub page, which is a representation of the interfaces that we've worked with thus far. This is not an exhaustive list of all of the interfaces that are possible; it's just the interfaces that have been done thus far on our various devices. As we work on new devices, we add new interfaces, but you'll see the list, and if you scroll back to the top on this, Jason (back down to supported devices), you'll see we've got a lot of the common interfaces that you want to use. If you have an interface that isn't listed here, CoreModel is a very straightforward transactional API, and we'd be more than happy to work with you on coming up with a CoreModel description of it as well.
Jason Yamada: And I also posted the URL to the CoreModel GitHub there in our participant chat.
Bill Neifert: Excellent, thank you Jason.
Jason Yamada: Let's see. Are you able to speak about the technology you're using to virtualize the screen of the remote device? WebRTC?
Bill Neifert: We are indeed using WebRTC under the covers to do this, and so you'd want to make sure that the WebRTC ports are opened by your IT in order to connect in. Since it's WebRTC based, we have the ability to actually take this experience and embed it into your own experience as well. We have something called the web player for this. So basically you take the guts of the screen that Jason was just showing here, and then you can embed it directly inside your own web experience. You're then responsible for your own authentication, et cetera, but you now have the ability to run this inside of your own site. It's basically just WebRTC, so you can take the iframe and embed it inside of your flow as well. So you see Jason is thankfully bringing this up. We've got client examples which can be shown on this as well. And we have a number of partner companies that are using this today, primarily to do online training and security analysis.
(52:01)
Jason Yamada: Do you have the ability to visualize the screen of a microcontroller as well?
Bill Neifert: And the answer to this is yes. We were just showing this on a Raspberry Pi, where we're virtualizing the display. If your device has a screen, we have the ability to represent it virtually. I think this includes an LCD screen as part of our primitives. You can redirect any frame buffers or similar outputs, or even redirect the output from a GPU onto this. So we have a variety of methods for creating a virtualized display.
Jason Yamada: Great. That covered all of the questions posted in the Q&A. Feel free to post any other questions or comments.
Bill Neifert: OK, we're not getting any more open questions, so we're going to do a last call and a wrap-up. So Jason, thank you so much for joining in today. You gave a great demo. And, oh, thank you. Thanks, Andrea. It’s always good to speak with you. And that covers it. Thank you, everyone, for joining us today. We’ll be posting this webinar for video streaming later on today on our website, as we always do. So check back later if you want to look at that link or share it with any coworkers. Thank you so much for joining, and we look forward to working with you.
Jason Yamada: Take care. Bye.
Speakers
Senior Vice President of Partnerships at Corellium, Bill Neifert
Bill has over 30 years of experience in the technology field. He began his career in 1990 as part of the Advanced Engineering Program at Bull HN. In 1995, he moved to Quickturn as East Coast Technical Manager. He then joined C Level Design in 1999, again as East Coast Technical Manager. In 2001, he founded Carbon Design Systems, where he served as CTO. In 2015, he joined Arm as Senior Director of Marketing, then moved to Senior Director of Market Development, and finally Director of Models Technology. In 2022, he joined Corellium as Senior Vice President of Partnerships.
Solutions Architect at Corellium, Jason Yamada
With over 15 years in the software industry, Jason has cultivated a deep understanding of the software development lifecycle by immersing himself in its many facets. His experience enables him to create integrations that support and enhance DevSecOps workflows, driving efficiency and security at every stage.