In fact, the main planning tool used to choreograph a flight deck looks a bit like a board game. To position aircraft and plan out flight schedules, Naval officers typically gather around a metal table, dubbed the “Ouija board.” These “deck handlers” move small metal plane-shaped cutouts around a diagram of the flight deck that’s etched into the tabletop. Color-coded thumbtacks and pins placed on the cutouts identify planes that need refueling or maintenance.
While the Ouija board has worked for decades to help human planners reshuffle flight decks, the system depends on people to keep track of all the moving parts. When lining planes up to launch or land, people in the tower radio down to the deck crew and pilots to confirm locations and prioritize planes with varying fuel levels and missions. When a ship’s equipment malfunctions, a person needs to reconfigure the entire deck on the fly to keep planes on schedule.
It’s a lot of data to process, and MIT’s Mary (“Missy”) Cummings, associate professor of aeronautics and astronautics, says computers can help people cut through it to draw up faster, safer and more efficient flight-deck plans. Cummings and her students in the Humans and Automation Lab have designed a computer interface, called the Deck operations Course of Action Planner (DCAP), that works with humans to track incoming flight data and create new deck-operation schedules.
The monitoring system may also cut down on the number of crew members needed to staff the deck, decreasing crowding and the risk of accidents.
“People are working elbow-to-elbow with vehicles that could potentially kill them,” Cummings says. “So the question is, can you actually figure out a better way to move the flow of traffic around a deck to mitigate the chances of someone being hit?”
Planning for failures to launch
To find the answer, Cummings and her students first identified key data that influence flight-deck traffic, including aircraft fuel levels, flight schedules and the status of deck machinery, such as launch catapults. The team then worked out a planning algorithm that creates an optimal schedule based on different scenarios.
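The article doesn’t spell out the algorithm itself, but a rough sketch of the kind of inputs it weighs (fuel levels, flight schedules and catapult status) might look like the following. The class names, fields and the simple greedy ordering rule are illustrative assumptions, not the lab’s actual method.

```python
# Illustrative sketch only: DCAP's real planner is not described in detail,
# so these types and the greedy rule below are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Aircraft:
    callsign: str
    fuel_minutes: float     # estimated flight time remaining on current fuel
    scheduled_time: float   # nominal slot, in minutes from now
    needs_landing: bool     # True if waiting to land, False if waiting to launch

@dataclass
class Catapult:
    name: str
    operational: bool

def plan_schedule(aircraft: list[Aircraft], catapults: list[Catapult]) -> list[Aircraft]:
    """Toy greedy rule: recover low-fuel aircraft first, then launch the rest
    in scheduled order, but only if at least one catapult is still working."""
    landings = sorted((a for a in aircraft if a.needs_landing),
                      key=lambda a: a.fuel_minutes)
    if not any(c.operational for c in catapults):
        return landings  # no launches possible until a catapult is repaired
    launches = sorted((a for a in aircraft if not a.needs_landing),
                      key=lambda a: a.scheduled_time)
    return landings + launches
```

A real deck planner would have to juggle many more constraints, but these are the inputs the team singled out.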
The researchers paired the algorithm with a user-friendly computer interface, with a main display showing an overhead view of the aircraft carrier and the current positions of ground crew, vehicles and planes on deck. A side panel lists the type and number of aircraft in line to land and launch, along with their flight schedules, while a display at the bottom keeps track of machinery and issues alerts when failures arise.
In a simulation, the algorithm was able to suggest new schedules based on changing data, such as a catapult failure or an incoming fighter jet running low on fuel.
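To picture how that replanning might work in code, the snippet below reuses the toy Aircraft, Catapult and plan_schedule definitions from the earlier sketch; the callsigns, numbers and event handling are again made up for illustration.

```python
# Continuing the sketch above (reusing Aircraft, Catapult and plan_schedule):
# when the deck state changes, the plan is simply recomputed on fresh data.
catapults = [Catapult("CAT-1", operational=True), Catapult("CAT-2", operational=True)]
deck = [
    Aircraft("Hornet-101", fuel_minutes=12, scheduled_time=30, needs_landing=True),
    Aircraft("Hornet-102", fuel_minutes=55, scheduled_time=10, needs_landing=False),
    Aircraft("Hawkeye-600", fuel_minutes=90, scheduled_time=20, needs_landing=False),
]
baseline = plan_schedule(deck, catapults)

# A catapult fails and the incoming fighter reports critically low fuel;
# re-running the planner produces a revised order immediately.
catapults[0].operational = False
deck[0].fuel_minutes = 6
revised = plan_schedule(deck, catapults)
print([a.callsign for a in revised])  # the low-fuel recovery is sequenced first
```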
“How to build a full schedule quickly that can compensate for failures … is something people cannot do very well,” says Jason Ryan, a PhD student in the Engineering Systems Division and a member of the DCAP team. “You get to a certain number of vehicles and it doesn’t work. But that’s something that algorithms are exceptional at. They [handle] all those complications faster than you can think.”
It’s not all about automation
Ryan adds that algorithms also have their faults — they can only do what they’re programmed to do. For that reason, DCAP is not meant to be a fully automated system, but is intended to provide informed suggestions that a human operator can accept or change. That way, the system allows room for scenarios that only humans, from experience, can anticipate.
For example, a deck handler may see on the schedule that a pilot with a history of shaky landings is due to land. Based on this anecdotal knowledge, the handler may choose to move the plane up in the landing schedule.
“If he botches it [the first time], we have enough time to try and give him another try,” Ryan says. “That’s something that’s hard to program into systems, but it’s something that a human can look at and understand.”
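In code terms, that human-in-the-loop step could be as simple as letting the handler reorder the planner’s proposed queue before it is committed. The hypothetical helper below is one way to express it; neither the function nor the example schedule comes from DCAP.

```python
# Hypothetical operator-override step, not part of DCAP itself.
def move_up(proposed: list[str], callsign: str, new_index: int) -> list[str]:
    """Return the planner's proposed queue with one aircraft moved to an earlier slot."""
    revised = [c for c in proposed if c != callsign]
    revised.insert(new_index, callsign)
    return revised

proposed = ["Hornet-103", "Hornet-104", "Hornet-105"]   # planner's suggestion
# The handler pulls the pilot with a history of rough landings to the front,
# leaving time for a second pass if the first approach is waved off.
committed = move_up(proposed, "Hornet-105", 0)
print(committed)   # ['Hornet-105', 'Hornet-103', 'Hornet-104']
```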
To fully deploy the system on a working aircraft carrier, Cummings says, the carrier environment — meaning the crew, the equipment and the aircraft — would have to be outfitted with sensors, such as radio-frequency identification (RFID) tags.
As a proof of principle, the group recently performed a successful live demonstration for the Office of Naval Research, which is sponsoring the project through a five-year grant. During the demo, DCAP tracked and controlled the positions of miniature unmanned vehicles. As researchers programmed scenarios into the system, such as a catapult failure or a fuel shortage, DCAP reordered the schedule of landings and takeoffs, and the vehicles received the message and lined up accordingly.
Mark Steinberg, program officer for the Office of Naval Research, says in the not-too-distant future, DCAP could be especially helpful for dealing with a growing fleet of unmanned aerial vehicles (UAVs) — drones that would already be equipped with plenty of onboard sensors and communication hardware.
“The long-term goal is, we would like to have unmanned autonomous systems onboard, which requires special procedures where you’d have to clear everything off the deck and implement certain safety measures,” Steinberg says. “We would like to make this as easy as possible as we introduce more and more UAVs onto carriers.”
Beyond the flight deck, Cummings says the system could be used in a number of environments where there are a lot of moving parts and people.
“It could be used in a commercial airport … or in the trucking industry,” Cummings says. “Boeing has suggested it could help them improve efficiency of their aircraft-manufacturing processing line. If we could show that this could be done in a carrier environment, then everything else would … be a piece of cake.”