News

April 30, 2026

Scholar outlines how GPT-4 is reshaping enterprise automation design

tech sector

By Nnasom David

A researcher has examined how the release of GPT-4 is influencing enterprise automation systems, arguing that organisations must rethink how artificial intelligence is integrated into decision-making processes.

Sheriff Adepoju, a student at Prairie View A&M University in the United States who focuses on infrastructure design and machine learning, said traditional automation systems have long struggled with unstructured and ambiguous data despite performing well in routine tasks.

He noted that the introduction of GPT-4 has changed how organisations can process information such as emails, service requests, and policy documents, which often contain inconsistent or incomplete inputs.

“There was always a hard border,” Adepoju said. “On one side, workflow engines and business rule tables. On the other, organisations quietly handed everything to people because the software could not cope with ambiguity.”

Adepoju proposed a three-layer model for enterprise automation, consisting of interpretation, verification, and accountability. In the model, language models are used to interpret unstructured data, while verification is handled by existing systems such as policy engines and workflow controls. Human oversight remains in place to review outcomes and ensure accountability.
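The three-layer model can be sketched in code. This is a minimal illustration, not Adepoju's implementation: the function names, the keyword heuristic standing in for a language model, and the confidence threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    rationale: str
    confidence: float

def interpret(raw_text: str) -> Proposal:
    """Layer 1 (interpretation): a language model turns ambiguous input
    into a structured proposal. Stubbed here with a keyword heuristic."""
    if "refund" in raw_text.lower():
        return Proposal("issue_refund", "customer requested refund", 0.8)
    return Proposal("escalate", "intent unclear", 0.3)

# Layer 2 (verification): deterministic policy rules, not the model,
# decide whether a proposal may proceed.
ALLOWED_ACTIONS = {"issue_refund", "escalate"}

def verify(p: Proposal) -> bool:
    return p.action in ALLOWED_ACTIONS and p.confidence >= 0.5

def route(raw_text: str) -> str:
    p = interpret(raw_text)
    if not verify(p):
        # Layer 3 (accountability): anything the rules cannot
        # approve is handed to a human reviewer.
        return "human_review"
    return p.action

print(route("Please process my refund."))  # issue_refund
print(route("Something went wrong?"))      # human_review
```

The point of the structure is that the language model only ever proposes; a governed, deterministic layer disposes, and humans remain the fallback for everything the rules cannot vouch for.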

He described the approach as “bounded machine assistance inside governed systems,” warning that removing oversight could undermine reliability and increase operational risk.

The researcher also criticised common adoption practices in which organisations deploy generative AI tools as add-ons to existing systems without redesigning underlying workflows. He said such approaches fail to address the structural changes required for effective integration.

According to him, one of the main challenges in enterprise environments is not decision-making itself but defining the problem clearly in the first place, since the available information is often conflicting or incomplete.

Adepoju further suggested that organisations should expand how they evaluate automation systems beyond accuracy alone, recommending metrics such as traceability, error recovery, latency, reversibility of actions, and cost per decision.
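A scorecard over those broader metrics might look like the following sketch. The field names, thresholds, and aggregation choices are illustrative assumptions, not a published evaluation framework.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    correct: bool
    traceable: bool      # can the outcome be traced to its inputs and rules?
    recovered: bool      # if wrong, was the error caught and corrected?
    latency_ms: float
    reversible: bool     # can the action be undone?
    cost_usd: float

def scorecard(records: list[DecisionRecord]) -> dict[str, float]:
    """Aggregate the metrics Adepoju recommends alongside accuracy."""
    n = len(records)
    errors = [r for r in records if not r.correct]
    return {
        "accuracy": sum(r.correct for r in records) / n,
        "traceability": sum(r.traceable for r in records) / n,
        # fraction of erroneous decisions that were later corrected
        "error_recovery": sum(r.recovered for r in errors) / len(errors) if errors else 1.0,
        "mean_latency_ms": sum(r.latency_ms for r in records) / n,
        "reversibility": sum(r.reversible for r in records) / n,
        "cost_per_decision": sum(r.cost_usd for r in records) / n,
    }
```

Tracking error recovery and reversibility alongside accuracy matters for the failure mode he describes next: a system that is wrong convincingly is dangerous mainly when its mistakes cannot be traced, caught, or undone.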

He noted that while rule-based systems tend to fail in predictable ways, language models can produce convincing but incorrect outputs, making proper safeguards essential.

His analysis has been discussed in academic and practitioner circles focused on enterprise AI deployment, particularly on how organisations can balance automation with human control.

Adepoju concluded that the key issue for enterprises is distinguishing between systems that support human decision-making and those that take over decision authority.