
Speakers took to the stage at last week’s Geneva Dry session on AI, Digitalisation and the Dry Bulk Workforce to the strains of R.E.M.’s It’s the End of the World as We Know It – a song choice that moderator Cynthia Worley of Sedna described as carefully selected. “Whether you think it’s the prize or the plague, you’ve already got it,” she told the room. “And now it’s just learning how to live with it.”
The session opened with a rapid-fire question to the panel: are shipping companies deploying AI faster than they are defining accountability for it? The room split. Alberto Perez, global head of maritime commercial markets at Lloyd’s Register, and Jonathan Canaan, global ocean freight director at ADM, said yes. Alex Albertini, CEO of Marfin Management, and Ingrid Kylstad, managing director of Klaveness Digital, said no. Scott Bergeron of Oldendorff Carriers offered perhaps the most honest answer of all: “Most of us are probably still trying to figure out how we’re going to deploy AI and not yet worried about the governance of AI.”
Worley opened with a warning that landed visibly in the room: in less than 100 days, the EU AI Act enters full enforcement, carrying fines of up to €35m or 7% of global annual turnover for companies unable to demonstrate governance of their AI processes. A show of hands revealed that almost nobody present had heard of it before that week.
Perez outlined Lloyd’s Register’s framework for AI accountability, arguing it must be viewed as part of a wider system rather than in isolation. “The proper definition of accountability needs clarity in three fronts: define the function being deployed, define the decision boundaries of that functionality, and define the performance limits of that deployment,” he said.
Bergeron drew a pointed analogy with radar. “Go back a few decades when ships didn’t have radar – and imagine that device that lets you see in darkness, in fog, 14 to 20 miles. Wow. But there have been plenty of radar-assisted collisions. So it wasn’t the final solution.” He confirmed AI is already on the BIMCO work programme, with a contractual clause in development. But his deeper concern was longer-term: “What happens 10 years from now when there are no more subject matter experts? Who’s going to be around to question the output of AI?”
Kylstad pushed back on the radar comparison. “I actually think AI is more transformational than the radar. And the reason is because we don’t understand it. Even the creators of the language models don’t understand how they reason or arrive at conclusions.” She described a recent decision not to hire a business analyst because Claude and ChatGPT subscriptions, combined with staff who had become skilled at using them, could do the job. “If someone told us 12 months ago what we could do with AI today, we wouldn’t have been able to predict it.”
The panel was unanimous that shipping remains a people business, and that AI should amplify rather than replace human capability. Albertini was emphatic: “AI is not an opportunity to fire people. It’s an opportunity to grow with the same staff, to do the same things and use the leverage of their knowledge and experience to make them superhumans.” He added a wry observation on the double standard applied to human versus machine accuracy. “We’re all quite confident hiring human beings who are 70% right.” Kylstad’s response drew laughter: “I would say 51% in many cases.”
Bergeron echoed the sentiment, noting that for 30 years the industry has been told brokers and charterers would be replaced by technology. “I don’t think that’s happening anytime soon.” Canaan agreed, warning against treating AI as purely a headcount reduction tool. “If you continue to look at AI as a replacement for workforce alone, you’re going to stay in that cycle.”
But Albertini introduced a concept that resonated with the room: the saboteur syndrome. When AI projects are handed to employees who fear for their own jobs, he warned, those employees can become the biggest internal obstacles to adoption. “They will fight AI so much that they will try to sabotage a project to make sure it’s not happening.” Change management, he concluded, is now as critical as the technology itself.
On the EU AI Act, Bergeron was candid about his preparation – he had consulted ChatGPT the week before the panel to understand it. He was relaxed about the fact that regulatory frameworks inevitably lag technological development. “Most regulatory structures follow developments. There are consequences and then there are amendments. I don’t see that as the fear.”
Kylstad acknowledged the philosophical difficulty of assigning accountability when an AI system makes a recommendation that a human acts upon without fully understanding its reasoning. “It’s intellectually lazy to say the person in the loop is always responsible. That person doesn’t necessarily understand the reasoning behind the recommendation.” She urged companies to ask hard questions of their AI vendors about where their models are weakest – and build processes around those known failure points.
Perez noted a consistent finding from Lloyd’s Register’s Digital Maturity Index: companies routinely perceive themselves as less AI-mature than their peers, even when using the same tools. “It’s not the same having a tool as extracting value from the tool,” he said.
Asked by the audience for their most jaw-dropping AI use case, the panellists gave revealing answers. Albertini described using Complexio to map unstructured email data, surfacing company-wide insight that left him feeling his own questioning wasn’t deep enough to exploit it fully. Bergeron described using Claude to transform raw vessel inspection reports – photos, notes and all – into structured, professional documents in minutes. Canaan cited Signal Ocean’s ability to shift the mindset of his trading team from reactive to strategic. Kylstad highlighted AI-assisted MVP development, which allows non-coders to build tools at a fraction of the previous cost and time.
Audience member Jamie Barrow from Trafigura provoked a sharp exchange on the economics of AI agents, asking what happens when an AI delivers a qualified decision at $1 versus a human at $150. Albertini offered the sharpest caution: “If today it’s $1 and tomorrow it’s $1,000, at some point we’re going to be stuck with a situation where people can’t afford it. If you start doing this arbitrage, I would wait a little bit.”
Worley, from Georgia like R.E.M., closed the session by returning to the song that opened it. “I know they keep repeating that line – it’s the end of the world. But what I want you to remember is the last line: and I feel fine.”
This article was written by Claude.