By Hyperspace Challenge Editorial Staff

Since its founding in 2015, data-science company RS21 has built a reputation for impactful work.

The startup, based in Albuquerque, New Mexico, has, among many other things, used its data-science modeling to help municipalities increase community safety and prepare for natural disasters; help healthcare workers predict disease risk and prioritize intervention resources; and help scientists understand how to improve the resiliency of communications and power grids.

One of its current initiatives is the COVID-19 Urban Health Vulnerability Index, which the company built in March 2020 to integrate disparate data sources and help identify urban populations at high risk from the coronavirus.

As varied as those applications may seem, the company's work has always had one important commonality: it has been applied to improving communities, not just in the United States but internationally.

With its recent participation in the Hyperspace Challenge accelerator as part of the program's 2020 cohort, which concluded with a December pitch competition that RS21 won, the company can now add an entirely new community to its service list: space.

The RS21 team that participated in Hyperspace Challenge, including Chief Technology Officer Kameron Baumgardner and Senior Data Scientist Dr. David Dooling, recently sat down with Hyperspace Challenge to talk about how a company that had never worked in space tech found itself with the winning pitch in an accelerator designed specifically to promote innovation far from Earth.

RS21 had never worked in the space domain before participating in Hyperspace Challenge, yet your winning pitch explained how you can predict satellite failures. Where did this idea come from?

KB: We had previously applied machine learning in the medical field to help better understand patient outcomes in cancer treatment. Our team was able to take that concept and essentially approach a satellite like a patient, predicting what could happen to it based on what its sensors and imagery are telling us.
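
Baumgardner's satellite-as-patient framing maps onto a standard data-science pattern: unsupervised anomaly detection, in which a model learns what healthy "vitals" look like and flags telemetry that drifts away from them. The sketch below illustrates the general idea with scikit-learn's IsolationForest and invented numbers; it is not RS21's actual model.

```python
# A minimal sketch of the "satellite as patient" idea, not RS21's
# actual model: an unsupervised detector learns what healthy "vitals"
# look like, then flags telemetry that drifts away from them.
# All channel names and numbers here are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend telemetry: rows are time steps; columns are hypothetical
# vitals (bus voltage, battery temperature, reaction-wheel current).
healthy = rng.normal(loc=[28.0, 15.0, 0.4],
                     scale=[0.3, 1.0, 0.05],
                     size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(healthy)

# A degrading component drifts away from its baseline.
failing = np.array([[28.1, 24.5, 0.9]])  # running hot, drawing current
print(detector.predict(failing))  # -1 means "flagged as anomalous"
```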

Shifting to work in space can present significant challenges for a company. What have they been for RS21?

KB: It is really challenging to get into a space mission on a four- to six-week timescale. There were entirely new things we needed to learn, like how to deal with radiation-induced bit flips in our data and limitations in satellite hardware. Most satellites have processing power equivalent to that of a 1980s laptop. But I think Hyperspace Challenge has connected us with the right partners to get us started down the right path, and we're bridging the gap with support from other participants in the cohort who are more experienced in those areas.
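
For readers unfamiliar with the bit-flip problem Baumgardner mentions: a radiation-induced single-event upset typically appears in downlinked telemetry as an isolated, physically impossible spike. One minimal ground-side screen, sketched below with hypothetical values, combines a plausibility range with a median filter; flight systems also rely on hardware protections such as error-correcting memory.

```python
# Hypothetical illustration of screening telemetry for radiation-induced
# bit flips before it reaches a model: a single-event upset tends to look
# like an isolated, physically impossible spike, so a plausibility range
# plus a median filter can catch it. This is only a ground-side sketch.
import numpy as np
from scipy.signal import medfilt

def clean_channel(samples, lo, hi, kernel=5, tol=2.0):
    """Replace out-of-range or spike readings with a median-filtered
    estimate built from neighboring samples."""
    smoothed = medfilt(samples, kernel_size=kernel)
    bad = (samples < lo) | (samples > hi) | (np.abs(samples - smoothed) > tol)
    return np.where(bad, smoothed, samples)

# 28 V bus telemetry with one reading corrupted by a flipped high bit.
voltage = np.array([28.0, 28.1, 27.9, 1.6e7, 28.0, 28.2])
print(clean_channel(voltage, lo=20.0, hi=35.0))
```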

What convinced you Hyperspace Challenge would be productive for you, and that there was, indeed, an application for your tech in space? 

KB: At the outset, we understood the focus of the cohort was trusted autonomy in space. But what really got us kick-started was our participation in the discovery webinars that Hyperspace Challenge hosts before teams submit their cohort applications. These meetings connected us to the mission-area specialists who developed the government needs the cohort would be tackling, so we could explore in advance how our technology might apply to those needs. One of the specialists who really struck a chord with us was Michelle Simon at the Air Force Research Lab, who talked about the need for better predictive capabilities for when or why satellites might fail.

At what point did you know that applying to the cohort would be worth your time? 

DD: During the discovery webinars it became clear that the quest of the U.S. Space Force is to digitize space, because you can't just go out into space and look manually to see what's going on. But you can receive non-stop signals of relevant and trusted data from the equipment that's out there, and that's the perfect scenario for machine-learning algorithms. It's actually much harder to make predictions on things like patient health and voter turnout, because there are so many exogenous factors you can't control. Once we learned more about what the Space Force needs, in this case to determine the health of a machine, we realized that our techniques are even better suited for space assets, which stream out data continuously because of the pace at which the machines monitor themselves. You can't go and draw blood from a patient every five minutes. But you can do the equivalent for a satellite.
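
Dooling's comparison translates directly into a routine data-engineering step: rolling high-rate telemetry up into regular "checkups" that a model can re-score. A brief sketch with invented channel names and rates:

```python
# Sketch of the "blood draw every five minutes" analogy: high-rate
# telemetry gets rolled up into regular health snapshots a model can
# re-score. Channel names and rates are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
idx = pd.date_range("2020-12-01", periods=3600, freq="s")  # 1 Hz for 1 hour

telemetry = pd.DataFrame({
    "battery_temp_c": 15 + 0.01 * rng.normal(0, 0.5, len(idx)).cumsum(),
    "bus_voltage_v": 28 + rng.normal(0, 0.1, len(idx)),
}, index=idx)

# One "checkup" per five minutes: summary statistics for each channel.
vitals = telemetry.resample("5min").agg(["mean", "std", "max"])
print(vitals.head())
```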

What did you learn and how did your idea evolve once you got into the program?

KB: We had some assumptions about how we could approach the problem. One of the major ones was that there would be a massive deluge of [satellite] data that we could pull down and play with. As it turns out, the federal government is protective of its satellites. But using a dataset NASA had published for other purposes, we were able to mock up a satellite system and prove out our algorithm. We also had some incorrect assumptions about the type of computing architecture and hardware on the satellites, as well as the types of data they offer. And we learned a lot about how that data informs the decision-making process.
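
The interview doesn't identify the NASA dataset, so the sketch below fabricates a generic run-to-failure sensor log, with hypothetical units, cycles, and channels, to show the shape of the approach: label each record by whether its unit fails soon, then train a classifier on the sensor readings.

```python
# Hypothetical stand-in for restricted satellite telemetry: fabricate a
# run-to-failure log (units, cycles, drifting sensors), label each record
# by whether the unit fails soon, and train a classifier on the sensors.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
rows = []
for unit in range(50):
    life = int(rng.integers(120, 250))  # randomized end of life
    for cycle in range(life):
        drift = (cycle / life) ** 2     # degradation signature
        rows.append({"unit": unit, "cycle": cycle,
                     "sensor_1": 0.5 + drift + rng.normal(0, 0.05),
                     "sensor_2": 640 + 30 * drift + rng.normal(0, 2.0)})
df = pd.DataFrame(rows)

# Label: does this unit fail within the next 30 cycles?
df["cycles_left"] = df.groupby("unit")["cycle"].transform("max") - df["cycle"]
df["fails_soon"] = (df["cycles_left"] <= 30).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["sensor_1", "sensor_2"]], df["fails_soon"],
    test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```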

What advice would you give to other companies that don't consider themselves space companies when it comes to thinking about how their tech could be applied to the space domain?

DD: I would say if you’ve got a hunch that there could be some application, just jump in. We were met with open arms by the Hyperspace Challenge team, the Air Force Research Lab and everyone in the cohort they brought together. We received nothing but support from everybody involved to help us bridge the gap.