[00:07] Lee Hutchinson: Hi folks, this is Lee Hutchinson, Senior Technology Editor at Ars Technica. Welcome back to the second part of this two-part special edition of the Ars Technicast. Last time we spoke with Northrop Grumman’s CTO, Scott Stapp, about the internet of things on the battlefield. Today, we’re going to take the conversation in a different direction and talk about the role of open systems in connecting what’s referred to as the Joint Force. That’s the umbrella term for the combined and coordinated functioning of multiple service branches from the US and its international allies. To dig into this, my guest is Richard Sullivan, Vice President of Program Management at Northrop Grumman. [00:46] Lee Hutchinson: Thanks for being with us, Richard. So, this is a really interesting topic, these open and secure systems in the Joint Force, and it requires a bit of unpacking to really understand what the buzzwords mean. For a big chunk of the Ars audience, when you say open systems, that carries some specific connotations. It usually means we’re talking about free and open-source software. And while open systems in the context we’re talking about today can include open-source applications, that’s not really the core meaning here. So, can you level-set us on what ‘open systems’ means in this context? [01:24] Richard Sullivan: Yeah, sure. And thanks, Lee, I really appreciate this engagement. When we talk about open systems, we’re really using that as shorthand for ‘open mission systems,’ and we’re talking about the flexibility of putting different capabilities on different platforms. And so, to the point that you’re making, it goes beyond software compatibility. It’s about making sure that a platform - an air vehicle, whether it’s a rotorcraft, a VTUAS (a vertical takeoff unmanned aerial system), or an unmanned airplane - has common interfaces that can accommodate different types of sensors or payloads. And it’s not only the electrical interface or the message interface, it’s also the physical interface: are we putting things on common physical interfaces that can be easily accommodated as well? I think a good example of this is the USB connector. The USB connector is a physical interconnect to your laptop or PC that will connect a keyboard, a mouse, a hard drive, a video camera, and so on, and so forth. And it’s that concept that we’re looking at. A lot of the sensors and the computing power are unique processors, and the question is how you make those unique processors interface with a variety of different platforms. So, when we talk about open systems, that’s really the concept we’re talking about. [03:01] Lee Hutchinson: Okay. So, this is like building an entire stack, a whole vertical column of different layers - and this could range from common application-level protocols all the way down to, like you said, physical connectors - and then, in some cases, the compute hardware that all of this runs on. And it almost reminds me, as an old IT guy, of the good old OSI model that we all had to learn. [03:23] Richard Sullivan: Yeah, exactly. We use the word APIs all the time. So, we want to develop the API, the interface between the application - and the application here is both a physical and a software thing - and the vehicle.
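[Editor’s note: Sullivan’s USB analogy maps fairly naturally onto a software interface contract. What follows is a minimal, hypothetical sketch in Python of what such a common payload API could look like - it is not Northrop Grumman’s actual open mission systems interface, and every name in it is invented - but it illustrates why a stable contract lets a vehicle swap sensors without touching its mission software.]

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical common message envelope. This stands in for the standardized
# message interface Sullivan describes; all field names are invented.
@dataclass
class PayloadMessage:
    payload_id: str
    timestamp_utc: float
    kind: str    # e.g. "detection", "status", "health"
    body: dict

class Payload(ABC):
    """Contract that every payload (radar, EO/IR ball, EW sensor) implements.

    The platform only ever talks to this interface, so swapping payloads
    does not require changes to the vehicle's mission software. This is
    the 'USB connector' idea applied to mission systems.
    """

    @abstractmethod
    def power_on(self) -> None: ...

    @abstractmethod
    def configure(self, settings: dict) -> None: ...

    @abstractmethod
    def poll(self) -> list[PayloadMessage]:
        """Drain any messages the payload has produced since the last poll."""

class RadarPod(Payload):
    """A toy concrete payload; a real one would wrap vendor-specific code."""

    def power_on(self) -> None:
        print("radar: powered on")

    def configure(self, settings: dict) -> None:
        self.scan_sector = settings.get("scan_sector", (0, 360))

    def poll(self) -> list[PayloadMessage]:
        # A real pod would return track and detection reports here.
        return [PayloadMessage("radar-1", 0.0, "status", {"ok": True})]

def mission_loop(payloads: list[Payload]) -> None:
    # The vehicle iterates over the common interface; it neither knows
    # nor cares which concrete sensor sits behind each entry.
    for p in payloads:
        for msg in p.poll():
            print(msg.kind, msg.body)

pod = RadarPod()
pod.power_on()
pod.configure({"scan_sector": (90, 180)})
mission_loop([pod])
```

[The vehicle’s mission loop is written once against the abstract interface, and new payloads plug in underneath it - the software equivalent of the USB connector.]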
Being able to keep those APIs constant - not for all sensors and all things, but, to your point, making them as common as possible - is a lot of the value of an open mission systems architecture. [03:52] Lee Hutchinson: So, this has got to be a rolling-wave kind of thing to implement, right? I mean, there has to be, I’m just guessing, a tremendous amount of work done to get all of the different branches of the US Department of Defense, and then potentially all of the contractors that feed into them, onto a standardized, common set of interfaces and buses and hardware - everything from software all the way up into the physical world. How much work is there to be done to get from where we are now to there? [04:24] Richard Sullivan: The thing is, technology in all facets of all domains is always changing, right? So, look at the standard - I’ll use the word recapitalization. The things that were installed on vehicles 20 years ago are still relevant, but maybe not as good as they could be with today’s technology. They just get redesigned in the course of technology maturing over time. [04:49] Lee Hutchinson: When you say technology attached to vehicles, you’re talking about potentially anything from a sensor pod on an aircraft all the way up to a radar system on a ship, and everything in between, right? [05:02] Richard Sullivan: That’s correct, right. Something that was a brilliant technological marvel 20 years ago - well, something like our iPhone today is hundreds of technological marvels from 20 or 30 years ago. All of those were giant-sized, and we’ve learned how to optimize and miniaturize over time. As the services are redeveloping and recapitalizing their radar sensors, as they’re recapitalizing their EW sensors, and so on and so forth, one of the new requirements is open architecture interfaces. So, the different systems, through the natural life cycle of their modernization programs, have the ability to implement this. It is a rolling wave, and it doesn’t all have to be done instantly, which is sort of a nice thing. Just as you change out your car - like most people, every, let’s say, four to five years - you don’t necessarily want all the things that were in your car two generations ago. You’re going to get the new things in your car, and it’s a similar process. [06:08] Lee Hutchinson: It’s interesting to see this strictly from a civilian point of view, and not to make this too much of a digression, but there’s a great flight simulator I’ve been playing called DCS. It’s super-duper popular, and there’s an F-14 module available for it - a fine Grumman product. And the F-14 module in this game has access to a LANTIRN pod you can attach to one of your wing pylons. A LANTIRN pod, as I’m sure you know, is one of those infrared targeting dealies that lets you laser-designate targets and drop smart munitions on them. And even though this is a game, it’s all modeled on real technology, and the introduction of that targeting pod onto that airframe was clearly something that happened after the fact. Watching the way it has been integrated into the airplane, there’s a separate control panel that had to be installed at the backseater’s position to access all of its capabilities, and the clearly retrofitted readouts from the pod are piped into instrumentation inside the cockpit and stuff.
That feels like an example of the kind of modernization that happened in the ’70s and ’80s to bring older weapon systems up to date. And it feels like we’re doing the same thing again, on a rolling basis, with all of our technology in the Department of Defense. [07:22] Richard Sullivan: I think we have to, right? It’s about affordability, and about whether what you have in existence - the legacy state - can meet the mission needs. It all comes down to: do we have the things in our systems that are able to satisfy the missions as effectively as possible and bring our uniformed service members home? At the end of the day, that’s really what we’re doing. And the things that have priority because of the advancements in technology on the adversary side - that’s what prioritizes what we do first. Then the second part of it is what we were talking about: how can we leverage advancing autonomy? These sensors now have more, I’ll say, smarts to them, so it isn’t necessarily like your example of the F-14 backseater having to hit all the buttons. If a lot of those buttons are ‘you have to hit this button if this happens,’ then to the extent that we can, we should start automating those functions. If a given set of incoming parameters would always result in a button press, let’s take that crew workload out - there’s no reason to have a crew member do something manually if, in that case, you would always take that action. And so, the ability to implement autonomy in order to make missions more effective, to make our products safer, and to react at machine speed - which in many cases you have to - is something we’re looking at as well. [09:02] Lee Hutchinson: So, that brings up something that I really wanted to ask you about. Let’s change gears a little bit and get into this question of force management autonomy. Could you break down exactly what the nuts and bolts of force management autonomy are? Because it feels like we then get into having to trust algorithms, and I’d like to know, first off, what level of autonomy are we talking about? And how are you validating algorithms so that they can be trusted in this decision loop? [09:31] Richard Sullivan: Let me start with the autonomy question. You can think of autonomy as a layered approach, which is what we’re taking at Northrop Grumman with the technology that we call DARC. DARC stands for Distributed Autonomy Responsive Control. DARC has the ability to do system-of-systems optimization. That’s the force level you’re talking about, where you have an objective for a mission involving multiple heterogeneous vehicles - in other words, different vehicles - with different sensors, and each of the vehicles has its own constraints: depending on when they took off, how much gas they still have, how many hours in the seat the pilot has, and so on and so forth. Those are all variables, and if you know them, you can optimize which is the best vehicle with the best sensor to perform a mission. The mission could be a surveillance mission - watch this point on the ground - and so on and so forth. With one vehicle, it’s really easy to manage that. With four vehicles, a human now has to understand all these parameters, which are all constantly changing.
The vehicles’ positions are constantly changing; the available gas, the human fatigue, any failures on the aircraft - all of those things are constantly changing. You get to a point at the force level where, with all these different variables, it becomes a complex problem - one where you can have the computer decide which is the best vehicle, given all those constraints, to perform that mission at that time. And as vehicles are moving and the missions are moving, as vehicles are returning to base and new vehicles are entering the area, it is a completely dynamic scenario. From a force management standpoint, we see autonomy being able to manage all of that through a concept called objectives and constraints. You provide a set of objectives, you provide a set of constraints - “Don’t go past this border. Don’t do this.” - and then the computer can optimize within that. [11:43] Lee Hutchinson: You have to have some level of human control in the decision-making loop here too, because out of everything that the US military does, logistics is probably the true hidden talent - even better, perhaps, than fighting wars. But there are sometimes mistakes: the table of organization might say you’ve got 15 Humvees or 15 MRAPs, but you actually have 13, because two of them are down and nobody logged it correctly, or whatever. So, there has to be some fuzz in here too, where you can shift from one decision to another because of unaccounted-for factors, right? [12:18] Richard Sullivan: Right. And that’s the responsive control part of it - that’s why it’s actually in the name of our product. You want the human to be on the loop, not in the loop, so the human or the operator can make real-time adjustments based on real-time information. And it’s exactly your example: if the Humvee that broke down is the one that was assigned to do this task, and the task needs to be done in the next couple of seconds, a human operator can make adjustments to the plan, absolutely. And then there’s vehicle-level autonomy, which is about how the vehicle uses its onboard sensors. You can think of things like sense-and-avoid systems, things like auto-landing. If you have an unmanned vertical takeoff craft that wants to land in an unsurveyed area, can you provide it with the right sensors - the lidar, the VIDAR sensors, maybe the digital terrain information - and enable that vehicle to land safely? Through a combination of sensors and fusion, the vehicle is able to autonomously do something not to a plan, but to an objective. [13:34] Richard Sullivan: And then you have payload management as well, which is one level deeper still. If you have a variety of payloads on a platform - it’s got an electro-optical sensor, it may have an electronic warfare sensor, it may have a radar sensor - how can you detect something with one of those sensors and say, “Wow, I think I’ve got something. EO/IR ball, can you go take a look at what my radar just picked up?” Getting multiple views with different phenomenologies enables you to positively ID something, as opposed to having one vehicle say, “I see something, let me get an operator to call another vehicle that can get there in five minutes and look as well.” So, there are three strata of autonomy that we look at, and the question is how we enable all of them to make a mission more effective.
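[Editor’s note: the force-level “objectives and constraints” scheme Sullivan describes is, at its core, a constrained assignment problem. The toy Python sketch below is entirely hypothetical - made-up numbers, a flat-earth distance model, nothing resembling a real force-management system - but it shows the shape of the problem: hard constraints rule vehicles out, an objective function picks the best of what remains, and the whole thing gets re-evaluated as positions, fuel, and crew fatigue change.]

```python
import math
from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    position: tuple[float, float]   # (x, y) in km on a toy flat-earth grid
    fuel_hours: float               # endurance remaining
    crew_duty_hours: float          # hours the crew has been in the seat
    healthy: bool

@dataclass
class SurveillanceTask:
    point: tuple[float, float]      # point on the ground to watch
    min_time_on_station: float      # hours

SPEED_KMH = 400.0       # assumed cruise speed, same for every vehicle
MAX_DUTY_HOURS = 10.0   # invented hard crew-fatigue constraint

def transit_hours(v: Vehicle, t: SurveillanceTask) -> float:
    dx = v.position[0] - t.point[0]
    dy = v.position[1] - t.point[1]
    return math.hypot(dx, dy) / SPEED_KMH

def feasible(v: Vehicle, t: SurveillanceTask) -> bool:
    """Hard constraints: the 'don't do this' rules that prune candidates."""
    if not v.healthy:
        return False
    if v.crew_duty_hours >= MAX_DUTY_HOURS:
        return False
    # Must reach the point, stay on station, and get home (round trip).
    needed = 2 * transit_hours(v, t) + t.min_time_on_station
    return v.fuel_hours > needed

def assign(vehicles: list[Vehicle], task: SurveillanceTask) -> Vehicle | None:
    """Objective: minimize time-to-station among feasible vehicles.

    Re-run every planning cycle, since positions, fuel, and fatigue are
    all constantly changing, so the best answer changes too.
    """
    candidates = [v for v in vehicles if feasible(v, task)]
    return min(candidates, key=lambda v: transit_hours(v, task), default=None)

fleet = [
    Vehicle("UAV-1", (0, 0), fuel_hours=6.0, crew_duty_hours=2.0, healthy=True),
    Vehicle("UAV-2", (50, 10), fuel_hours=1.5, crew_duty_hours=3.0, healthy=True),
    Vehicle("UAV-3", (20, 20), fuel_hours=8.0, crew_duty_hours=11.0, healthy=True),
]
task = SurveillanceTask(point=(60, 0), min_time_on_station=1.0)
best = assign(fleet, task)
print(best.name if best else "no feasible vehicle")
```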
[14:32] Richard Sullivan: Now, everything I said is quite complicated, so how do you know, when you’re invoking this autonomy - and this is to answer your second question - that it’s doing what you expect it to do? Your point about trusted algorithms is just so important, and it comes down to the understanding and the validation that get put in place. As an example - and your listeners can go look up Northrop Grumman’s X-47B landing on an aircraft carrier - before the first time that was done for real, it was done hundreds of thousands of times in a simulated environment, adjusting every one of the parameters to understand how the aircraft is going to react when the carrier is in more dynamic sea conditions, how the aircraft behaves in different wind conditions, and so on and so forth. And you have validated simulations and validated outputs that give you confidence and trust that, under any realizable input condition, the vehicle is going to respond within a set of expectations. [15:42] Lee Hutchinson: That’s sort of the hardest problem with machine learning in any field, too: computers don’t intuit, at least not yet. They’re only as good as the training cycles you feed them. They don’t draw conclusions, they simply synthesize what you give them - and sometimes in odd ways. [16:00] Richard Sullivan: They’re interpolating or extrapolating, depending on what you want them to do. And that’s why the concept I mentioned - how we set the constraints - is so important. In the force-level management example I gave earlier, the vehicles can only operate within their flight envelopes, we want them to not hit another vehicle, we want them to stay within a certain area, and the decisions the autonomy makes are within those constraints. [16:31] Lee Hutchinson: I want to change gears just a little bit; I have one other line of questions I want to run down here as we get into the back half of this. We previously talked with Northrop Grumman’s CTO, Scott Stapp, who did a great job telling us about the battlefield internet of things and instrumenting all of the layers of the battlefield layer cake, as it were. And I want to ask something that kind of bridges the gap between his interview and this discussion. Can you give me some idea about - I guess I’d call it the propagation time of important information during combat, up from the individual unit level to the command level, and the existing complexities of overcoming the fog of war and getting battlefield commanders the correct mission information? And then talk about where this idea of the joint force armed with open systems can cut delays out of the current picture? [17:27] Richard Sullivan: The first question, Lee, is something I would really have to defer on, because what we’re asked to do is provide the products that can collect the information and distribute the information. The value proposition of how that all works is something the users are the best people to answer. But what I can answer is the concept in your second question, which is enabling that information.
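[Editor’s note: before the conversation moves on - the X-47B validation campaign Sullivan described a moment ago is, in spirit, a Monte Carlo exercise: run the landing simulation an enormous number of times while sweeping the input conditions, and confirm the response stays inside an acceptance envelope. The Python sketch below is a toy stand-in - the “dynamics” are a fake one-line model and the tolerance number is invented - but it illustrates the mechanics of that kind of envelope testing.]

```python
import random

# Toy stand-in for a validated landing simulation: returns the touchdown
# error (meters from the target wire) for one set of conditions. A real
# campaign would run a high-fidelity, validated flight model here.
def simulate_landing(wind_kts: float, deck_heave_m: float, gust_kts: float) -> float:
    error = 0.08 * wind_kts + 1.5 * deck_heave_m + 0.2 * gust_kts
    return error + random.gauss(0.0, 0.5)   # residual dispersion

TOLERANCE_M = 6.0   # assumed acceptance envelope for touchdown error
N_RUNS = 100_000

random.seed(1)
failures = 0
for _ in range(N_RUNS):
    # Sweep the input space: every run draws different wind, sea-state,
    # and gust conditions, mirroring "adjusting every one of the parameters."
    wind = random.uniform(0.0, 30.0)
    heave = random.uniform(0.0, 2.0)
    gust = random.gauss(0.0, 5.0)
    if abs(simulate_landing(wind, heave, gust)) > TOLERANCE_M:
        failures += 1

print(f"{failures} exceedances in {N_RUNS} runs "
      f"({failures / N_RUNS:.4%} of cases outside the envelope)")
```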
So, any one of our vehicles is a node in that internet of things concept, and the information on any one node can be shared through a ruleset where you say, “This information is shareable with everybody, this information is shareable with other fighters, or this information should be shared with the people going into this area of regard.” Sharing everything with everybody all the time is, in general, a bandwidth problem. So, having some stratification of which data you share with which users is important, simply because of the limited bandwidth that exists. [18:41] Richard Sullivan: The other part is enabling that relevant data to be integrated with the sensors on the platform - what we call the organic sensors - where the offboard or external sensor data is provided through the nodes. And then there’s the ability to time-register that data: if you have this radar detection, how do you know it’s the same radar detection caught by another platform at some other place? How do you register the two so that you can truly fuse the data? That’s all done through a variety of processes, but it’s at that level of data integration that we see artificial intelligence and machine learning taking everything to the next level. Advanced networking and advanced communications are definitely things our team is working on, and both are critically important: you have to be connected to create a network, and then you have to have the network in place to share the data. So, it’s a combination of the autonomy to put the vehicles in positions where they can be valuable and relevant to one another, and the communications being in place so that you have a reliable connection. My phone hung up here a few seconds ago, for who knows why - I mean, my phone’s on the table, it’s not even being touched, and it’s normally a reliable connection. Even things that are reliable are not always reliable. So, what do you do about it? How do the systems react to comms that go down? [20:24] Lee Hutchinson: You have to design all of this for maximum reliability, because notionally these systems are not going to be operated in a clean environment, in a lab, in an office. These are systems that need to operate in a forward-deployed area, potentially without a lot of logistical support, by 18-, 19-, 20-year-olds who are under stress and potentially operating on a lack of sleep, along with all the other stresses of being deployed. [20:51] Richard Sullivan: Absolutely. And how do we make that as easy as possible, yet still allow the operators to have complete control? [21:00] Lee Hutchinson: Here’s kind of a nuts-and-bolts question, then. We’re talking about the Joint Force, and we see the word ‘joint’ as the leading word across a whole spectrum of military acronyms. As technology and communications advance, it gets easier - or I guess at least “easier” - to coordinate different branches of the service, like having the Army and the Navy both take part in a specific engagement with complementary roles. And this requires all of that logistics skill the military is excellent at. Getting the Army and the Navy to the point where they can coordinate at the unit level is hard, but we can do it. But putting aside the paperwork and planning aspect and looking at the actual bits that have to be flipped back and forth for all of that to happen - the files, the protocols - sharing information across a joint force is itself a tremendous logistical challenge.
And it’s not even just figuring out the chain of command - there are actual technical meshing issues that have to be overcome here across service branches, right? [22:00] Richard Sullivan: I think some of the systems are definitely going to have an easier time implementing that ability to be connected as one. Among the different technologies we see, one approach is to build a purpose-built radio that can connect everybody - but fielding that across everything that exists in all the services is a major undertaking. At the end of the day, though, these systems are communicating. For example, cell phone carriers use different bands, not much differently than military radios do. Yet you have no idea what carrier I’m talking on, because there’s something in the middle that’s translating between the different cellular companies’ frequency bands and making us all connect. That concept is similar to what we’re looking at bringing forward: how do you leave the existing comm systems - on the aircraft, the ground vehicle, the surface ship - in place, but enable the data to not really care whether it came over UHF, over TTNT, or over a SATCOM datalink? It really cares more about the data itself. And there are going to be some things that have to translate between fourth-gen and fifth-gen and all that kind of stuff. Those are products that we’re making and developing - in some cases, we’re flying them on vehicles - to bring these different frequency bands together, and then connect the data together. [23:42] Richard Sullivan: We have radio systems within our mission systems sector that do that. But the real concept is bringing all the relevant data together, and then delivering just the time-relevant data to the vehicle. So, again, if I’m driving down the freeway and there’s an accident, and you’re two minutes behind me, then if I can give you that information so you can get off the road, that accident won’t affect you - or it’ll affect you less than it affected me. How do we take that concept to the next level, to the hundredth power? For all the things we have deployed, how can we leverage the information they have and scoop out what’s relevant for another air vehicle? Every one of our vehicles is sort of a flying vacuum cleaner collecting a broad spectrum of data. How is that data relevant to a surface ship? How is that data relevant to a ground force? How is that data relevant to another air platform? There’s likely data that’s relevant - likely, not guaranteed, and not all the time - that’s going to help the other system be more effective and more efficient. That is a great problem to solve. You can think of every one of the platforms - the airborne platforms, the ground platforms, the surface platforms, the subsurface platforms - as collecting some data that is relevant to somebody else’s mission at some point within that mission. And you want to be able to connect the temporal piece: this could affect you two minutes from now, or two hours from now, and this information should go into your plan, which you should dynamically update given this information you didn’t have before. And the information is already being collected. It’s already being collected on every one of the platforms.
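[Editor’s note: “scooping out what’s relevant” for each consumer can be pictured as a publish/subscribe ruleset that filters reports by type, distance, and age before they ever hit the link. The Python sketch below is hypothetical - real tactical data links and their rulesets are far more involved - but it shows how a simple relevance filter conserves bandwidth by forwarding each report only to the nodes it could plausibly affect.]

```python
import math
from dataclasses import dataclass

@dataclass
class Report:
    source: str
    position: tuple[float, float]   # where the detection is, km grid
    time_s: float                   # when it was made
    kind: str                       # "track", "weather", "threat", ...

@dataclass
class Subscriber:
    name: str
    position: tuple[float, float]
    interest_radius_km: float       # the subscriber's "area of regard"
    max_age_s: float                # stale data is not worth the bandwidth
    kinds: set[str]

def relevant(r: Report, s: Subscriber, now_s: float) -> bool:
    """Ruleset: right kind, close enough, fresh enough."""
    if r.kind not in s.kinds:
        return False
    if now_s - r.time_s > s.max_age_s:
        return False
    dx = r.position[0] - s.position[0]
    dy = r.position[1] - s.position[1]
    return math.hypot(dx, dy) <= s.interest_radius_km

def route(reports: list[Report], subs: list[Subscriber], now_s: float) -> None:
    # Forward each report only where it matters, instead of broadcasting
    # everything to everybody, which is the bandwidth problem Sullivan raised.
    for r in reports:
        for s in subs:
            if relevant(r, s, now_s):
                print(f"{r.source} -> {s.name}: {r.kind} at {r.position}")

ship = Subscriber("surface-ship", (0, 0), 100.0, 300.0, {"track", "threat"})
convoy = Subscriber("ground-convoy", (400, 0), 30.0, 120.0, {"threat"})
reports = [
    Report("uav-1", (50, 20), time_s=990.0, kind="track"),
    Report("uav-2", (395, 5), time_s=950.0, kind="threat"),
    Report("uav-2", (395, 5), time_s=100.0, kind="threat"),   # stale, dropped
]
route(reports, [ship, convoy], now_s=1000.0)
```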
And it’s just about connecting the relevant data - sometimes directly to the platform, because you may already be in a force package with somebody else. One aircraft sees something and says, “Oh my gosh, I got a detection,” over its line-of-sight comm link, and the other vehicle can say, “Let me use my sensor and look at that same detection as well. Is it really a detection or not?” [26:16] Lee Hutchinson: This sounds extremely difficult to get right, because you have to serve multiple masters, as it were. If I’m on the receiving end of this data and I’m a general at the division level, what I care about is going to be considerably different from what other users of these systems - at the brigade or battalion or even the company level - are going to care about. Sometimes drastically different information, right? [26:39] Richard Sullivan: Yeah, the answer to that is yes, and that’s where the AI/ML and that filtering of relevant information become really important. [26:48] Lee Hutchinson: And I guess the trick is to figure out where in the stack you slot the smarts. Do you make the vehicle aware of what information it should and shouldn’t be transmitting, or does the vehicle transmit everything it can, with a system in the middle that gates the information out? Where in the stack do you put the smarts? [27:07] Richard Sullivan: You actually bring up a great point, which is that these concepts are optimized by distributing the autonomy - putting some level of autonomy on the vehicles themselves. A vehicle can manage some of its sensor information, knowing that there are other aspects of a mission it can provide information for. And distributing that autonomy within the vehicles - even a small vehicle, any scale of vehicle - so that it can reduce the latency of information is something that’s important. Yet you also want to provide the information, in your example, back to the mission commander, the person at the air operations center, so they have knowledge of what’s going on and what the vehicles are doing as well. [28:01] Richard Sullivan: There’s an example I usually give because it gets everybody kind of riled up. It’s football season right now, so I’m going to give a football example - and let’s just talk about the offense. The offense goes out with a play, and they’re looking at what the defense is doing. Everyone - the people on the offensive line, the tight ends, the running back, the wide receivers - is adapting a little bit. They’re not grossly changing the play. Now say the quarterback goes out and the snap goes over his head. The objective of the football team is to score a touchdown in the end zone, but in that circumstance, where the play kind of breaks down, they don’t all just run to the end zone, even though that’s ultimately the objective. What they do is take a smaller objective, and they look for ways to get open. The running back becomes a short-pass option; there may be a wide receiver that still runs deep. They’re continuously adapting to what the defense is doing. And when we talk about the individual autonomy within the vehicles themselves, it’s the same thing: they’re still constrained, they’re not just totally open-loop. They’re operating within a different set of constraints so that the system as a whole can be optimized. And it’s that level of information that you want to share.
For example, you may always want to share your own precise position information, so that every other vehicle knows exactly where you are in 3D space. You may also want to share, “Here are some of the sensor detections I got that are in a relevant range” - maybe not every sensor detection, but detections within, let’s say, 50 kilometers. That way you don’t burden the comm systems, you don’t burden, I’ll say, the information backbone. And in an instance where the plan can simply be run, there’s very little information that needs to be sent, because plans are made in advance to execute a mission optimally with the information available when the mission was planned. And just like with every plan, new information comes in once you start: you shouldn’t execute the exact plan, you should figure out how to dynamically adapt it. That’s really what we see as being possible across air, sea, land, and subsurface. [30:33] Lee Hutchinson: That makes a huge amount of sense. The only problem is I live in Houston, so when you talk about football teams that operate successfully, I don’t know what you’re talking about, man. Sorry. Sorry to my Texans. So, we’re near the end of our time here, and I know that the world of military procurement, the DOD, is a slow world. As we’re discussing these open systems and all of this upcoming technology, we’re not necessarily talking about anything that’s going to be rolled out on the battlefield in the next 90 days or anything crazy like that. But as with so much military technology - or so much technology originally envisioned in a defense role - there are lots of real-world, civilian-type applications for this concept of a joint integrated force using open systems, right? [31:28] Richard Sullivan: I think that’s really a key takeaway. A lot of these technologies - and I’ll say the defense industry is developing things that are generally very costly and take more time, because they’re inventing a lot of stuff - are going to carry over. As a result of these inventions, we’re going to see, for example, how cargo integrates with the national airspace. There are numerous unmanned systems out there today. How do we take what’s available today, and then adapt the ruleset to enable package delivery systems that bring things to your house? How do we look at solutions for integrating unmanned air vehicles, in particular, into national or international airspace? How do those rulesets carry over to commercial capability as well as military capability? That’s part of the conversation, that’s part of what my colleagues are driving, and I think that’s good, ultimately, for everybody. How do we look at making automobiles safer? A lot of folks see the sensors on their vehicles doing traffic detection and blind-spot detection - those are all, I’ll say, versions of things that were used to land vehicles and the like in the past. So, the implementation in the commercial world is different, but the concepts are very similar. And I think the concepts being developed on the defense side will make everything we do in our lives safer and more technologically advanced in five to ten years. The speed is getting faster, too, and the practice of agile development has made its way into the aerospace and defense industry.
So, I think quicker implementation and delivery of products is a transformation within the military services that is going to enable faster deployment of capability to the warfighter. And I think it is really important that we are delivering capability at the speed of relevance. [33:39] Lee Hutchinson: Excellent. Okay. My guest today has been Richard Sullivan, Vice President of Program Management at Northrop Grumman. Richard, thank you for taking the time to talk. [33:46] Richard Sullivan: No problem at all. I really appreciate it. [33:51] Lee Hutchinson: Thanks for listening to this two-part special edition of the Ars Technicast. For more on this topic, and for a whole range of articles and videos about how science and technology are shaping our world, stop by arstechnica.com.