TechTrendFeed

Research, Review, Rebuild

By Admin
August 27, 2025


Until recently, I held the belief that Generative Artificial Intelligence
(GenAI) in software development was predominantly suited to greenfield
projects. However, the introduction of the Model Context Protocol (MCP)
marks a significant shift in this paradigm. MCP emerges as a transformative
enabler for legacy modernization, especially for large-scale, long-lived, and
complex systems.

As part of my exploration into modernizing Bahmni's codebase, an
open-source Hospital Management System and Electronic Medical Record (EMR),
I evaluated the use of the Model Context Protocol (MCP) to support the migration
of legacy display controls. To guide this process, I adopted a workflow that
I refer to as "Research, Review, Rebuild", which provides a structured,
disciplined, and iterative approach to code migration. This article outlines
the modernization effort, one that goes beyond a simple tech stack upgrade, by
leveraging Generative AI (GenAI) to accelerate delivery while preserving the
stability and intent of the existing system. While much of the content
focuses on modernizing Bahmni, this is simply because I have hands-on
experience with the codebase.

The initial results have been nothing short of remarkable. The
streamlined migration effort led to noticeable improvements in code quality,
maintainability, and delivery speed. Based on these early outcomes, I
believe this workflow, when augmented with MCP, has the potential to become a
game changer for legacy modernization.

Bahmni and Legacy Code Migration

Bahmni is an open-source Hospital Management
System & EMR built to support healthcare delivery in low-resource
settings, providing a rich interface for clinical and administrative users.
The Bahmni frontend was originally
developed using AngularJS (version 1.x), an
early but powerful framework for building dynamic web applications.
However, AngularJS has long been deprecated by the Angular team at Google,
with official long-term support having ended in December 2021.

Despite this, Bahmni continues to rely heavily on AngularJS for many of
its core workflows. This reliance introduces significant risks, including
security vulnerabilities from unpatched dependencies, difficulty in
onboarding developers unfamiliar with the outdated framework, limited
compatibility with modern tools and libraries, and diminished maintainability
as new requirements are built on an aging codebase.

In healthcare systems, the continued use of outdated software can
adversely affect clinical workflows and compromise patient data safety.
For Bahmni, frontend migration has become a critical priority.

Research, Review, Rebuild

Figure 1: Research, Review, Rebuild Workflow

The workflow I adopted is called "Research, Review, Rebuild": we
do a feature migration research using a couple of MCP servers, validate
and approve the approach the AI proposes, rebuild the feature, and then, once
all the code generation is done, refactor the things that we did not like.

The Workflow

  1. Prepare a list of features targeted for migration. Pick one feature to
    start with.
  2. Use Model Context Protocol (MCP) servers to research the selected feature
    by generating a contextual analysis of it through a Large
    Language Model (LLM).
  3. Have domain experts review the generated analysis, ensuring it is
    accurate and aligns with existing project conventions and architectural guidelines.
    If the feature is not sufficiently isolated for migration, defer it and update
    the feature list accordingly.
  4. Proceed with an LLM-assisted rebuild of the validated feature to the target
    system or framework.
  5. Until the list is empty, return to step 2.
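The steps above can be sketched as a simple loop. This is a hypothetical illustration only: the step functions are placeholders standing in for the MCP-assisted research, the expert review, and the LLM-assisted rebuild described in the text, not a real API.

```typescript
// Hypothetical sketch of the "Research, Review, Rebuild" loop.
// The three step functions are placeholders for the MCP/LLM-assisted
// activities described in the text.

type Feature = { name: string };
type Analysis = { feature: Feature; summary: string };

function research(feature: Feature): Analysis {
  // In practice: MCP servers + an LLM produce a contextual analysis.
  return { feature, summary: `analysis of ${feature.name}` };
}

function review(analysis: Analysis): boolean {
  // In practice: domain experts validate the analysis; a feature that is
  // not sufficiently isolated is deferred.
  return analysis.summary.length > 0;
}

function rebuild(analysis: Analysis): string {
  // In practice: LLM-assisted rebuild into the target framework.
  return `rebuilt:${analysis.feature.name}`;
}

function migrate(features: Feature[]): string[] {
  const rebuilt: string[] = [];
  const deferred: Feature[] = [];
  for (const feature of features) {
    const analysis = research(feature);
    if (!review(analysis)) {
      deferred.push(feature); // revisit once dependencies are untangled
      continue;
    }
    rebuilt.push(rebuild(analysis));
  }
  return rebuilt;
}

console.log(migrate([{ name: "treatments" }]));
```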

Before Getting Started

Before we proceed with the workflow, it is essential to have a
high-level understanding of the existing codebase and determine which
components should be retained, discarded, or deferred for future
consideration.

In the context of Bahmni, Display Controls
are modular, configurable widgets that can be embedded across various
pages to enhance the system's flexibility. Their decoupled nature makes
them well-suited for targeted modernization efforts. Bahmni currently
includes over 30 display controls developed over time. These controls are
highly configurable, allowing healthcare providers to tailor the interface
to display pertinent data like diagnoses, treatments, lab results, and
more. By leveraging display controls, Bahmni facilitates a customizable
and streamlined user experience, aligning with the diverse needs of
healthcare settings.

All the existing Bahmni display controls are built over OpenMRS REST
endpoints, which are tightly coupled with the OpenMRS data model and
specific implementation logic. OpenMRS (Open
Medical Record System) is an open-source platform designed to serve as a
foundational EMR system primarily for low-resource environments, providing
customizable and scalable ways to manage health data, especially in
developing countries. Bahmni is built on top of OpenMRS, relying on
OpenMRS for clinical data modeling and patient record management and using
its APIs and data structures. When someone uses Bahmni, they are
essentially using OpenMRS as part of a larger system.

FHIR (Fast Healthcare
Interoperability Resources) is a modern standard for healthcare data
exchange, designed to simplify interoperability by using a flexible,
modular approach to represent and share clinical, administrative, and
financial data across systems. FHIR was introduced by
HL7 (Health Level Seven International), a
not-for-profit standards development organization that plays a pivotal
role in the healthcare industry by developing frameworks and standards for
the exchange, integration, sharing, and retrieval of electronic health
information. The term "Health Level Seven" refers to the seventh layer
of the OSI (Open Systems
Interconnection) model, the application layer,
which is responsible for managing data exchange between distributed systems.

Although FHIR was initiated in 2011, it reached a significant milestone
in December 2018 with the release of FHIR Release 4 (R4). This release
introduced the first normative content, marking FHIR's evolution into a
stable, production-ready standard suitable for widespread adoption.

Bahmni's development commenced in early 2013, at a time when FHIR
was still in its early stages and had not yet achieved normative status.
As such, Bahmni relied heavily on the mature and production-proven OpenMRS
REST API. Given Bahmni's dependence on OpenMRS, the availability of FHIR
support in Bahmni was inherently tied to OpenMRS's adoption of FHIR. Until
recently, FHIR support in OpenMRS remained limited, experimental, and
lacked comprehensive coverage for many essential resource types.

With the recent advancements in FHIR support within OpenMRS, a key
priority in the ongoing migration effort is to architect the target system
using FHIR R4. Leveraging FHIR endpoints facilitates standardization,
enhances interoperability, and simplifies integration with external
systems, aligning the system with globally recognized healthcare data
exchange standards.

For the purpose of this experiment, we will focus specifically on the
Treatments Display Control as a representative candidate for
migration.

Figure 2: Legacy Treatments Display Control built using
Angular and integrated with OpenMRS REST endpoints

The Treatment Details Control is a specific type of display control
in Bahmni that focuses on presenting comprehensive information about a
patient's prescriptions, or drug orders, over a configurable number of
visits. This control is instrumental in providing clinicians with a
consolidated view of a patient's treatment history, aiding informed
decision-making. It retrieves data via a REST API, processing it into a
view model for UI rendering in a tabular format, supporting both current
and historical treatments. The control incorporates error handling, empty
state management, and performance optimizations to ensure a robust and
efficient user experience.

The data for this control is sourced from the
/openmrs/ws/rest/v1/bahmnicore/drugOrders/prescribedAndActive endpoint,
which returns visitDrugOrders. The visitDrugOrders array contains
detailed entries that link drug orders to specific visits, including
metadata about the provider, drug concept, and dosing instructions. Each
drug order includes prescription details such as drug name, dosage,
frequency, duration, administration route, start and stop dates, and
standard code mappings (e.g., WHOATC, CIEL, SNOMED-CT, RxNORM).

Here is a sample JSON response from Bahmni's
/bahmnicore/drugOrders/prescribedAndActive REST API endpoint, containing
detailed information about a patient's drug orders during a specific
visit, including metadata like drug name, dosage, frequency, duration,
route, and prescribing provider.

{
  "visitDrugOrders": [
    {
      "visit": {
        "uuid": "3145cef3-abfa-4287-889d-c61154428429",
        "startDateTime": 1750033721000
      },
      "drugOrder": {
        "concept": {
          "uuid": "70116AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
          "name": "Acetaminophen",
          "dataType": "N/A",
          "shortName": "Acetaminophen",
          "units": null,
          "conceptClass": "Drug",
          "hiNormal": null,
          "lowNormal": null,
          "set": false,
          "mappings": [
            {
              "code": "70116",
              "name": null,
              "source": "CIEL"
            },
            /* Response Truncated */
          ]
        },
        "instructions": null,
        "uuid": "a8a2e7d6-50cf-4e3e-8693-98ff212eee1b",
        "orderType": "Drug Order",
        "accessionNumber": null,
        "orderGroup": null,
        "dateCreated": null,
        "dateChanged": null,
        "dateStopped": null,
        "orderNumber": "ORD-1",
        "careSetting": "OUTPATIENT",
        "action": "NEW",
        "commentToFulfiller": null,
        "autoExpireDate": 1750206569000,
        "urgency": null,
        "previousOrderUuid": null,
        "drug": {
          "name": "Paracetamol 500 mg",
          "uuid": "e8265115-66d3-459c-852e-b9963b2e38eb",
          "form": "Tablet",
          "strength": "500 mg"
        },
        "drugNonCoded": null,
        "dosingInstructionType": "org.openmrs.module.bahmniemrapi.drugorder.dosinginstructions.FlexibleDosingInstructions",
        "dosingInstructions": {
          "dose": 1.0,
          "doseUnits": "Tablet",
          "route": "Oral",
          "frequency": "Twice a day",
          "asNeeded": false,
          "administrationInstructions": "{\"instructions\":\"As directed\"}",
          "quantity": 4.0,
          "quantityUnits": "Tablet",
          "numberOfRefills": null
        },
        "dateActivated": 1750033770000,
        "scheduledDate": 1750033770000,
        "effectiveStartDate": 1750033770000,
        "effectiveStopDate": 1750206569000,
        "orderReasonText": null,
        "duration": 2,
        "durationUnits": "Days",
        "voided": false,
        "voidReason": null,
        "orderReasonConcept": null,
        "sortWeight": null,
        "conceptUuid": "70116AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
      },
      "provider": {
        "uuid": "d7a67c17-5e07-11ef-8f7c-0242ac120002",
        "name": "Super Man",
        "encounterRoleUuid": null
      },
      "orderAttributes": null,
      "retired": false,
      "encounterUuid": "fe91544a-4b6b-4bb0-88de-2f9669f86a25",
      "creatorName": "Super Man",
      "orderReasonConcept": null,
      "orderReasonText": null,
      "dosingInstructionType": "org.openmrs.module.bahmniemrapi.drugorder.dosinginstructions.FlexibleDosingInstructions",
      "previousOrderUuid": null,
      "concept": {
        "uuid": "70116AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
        "name": "Acetaminophen",
        "dataType": "N/A",
        "shortName": "Acetaminophen",
        "units": null,
        "conceptClass": "Drug",
        "hiNormal": null,
        "lowNormal": null,
        "set": false,
        "mappings": [
          {
            "code": "70116",
            "name": null,
            "source": "CIEL"
          },
          /* Response Truncated */
        ]
      },
      "sortWeight": null,
      "uuid": "a8a2e7d6-50cf-4e3e-8693-98ff212eee1b",
      "effectiveStartDate": 1750033770000,
      "effectiveStopDate": 1750206569000,
      "orderGroup": null,
      "autoExpireDate": 1750206569000,
      "scheduledDate": 1750033770000,
      "dateStopped": null,
      "instructions": null,
      "dateActivated": 1750033770000,
      "commentToFulfiller": null,
      "orderNumber": "ORD-1",
      "careSetting": "OUTPATIENT",
      "orderType": "Drug Order",
      "drug": {
        "name": "Paracetamol 500 mg",
        "uuid": "e8265115-66d3-459c-852e-b9963b2e38eb",
        "form": "Tablet",
        "strength": "500 mg"
      },
      "dosingInstructions": {
        "dose": 1.0,
        "doseUnits": "Tablet",
        "route": "Oral",
        "frequency": "Twice a day",
        "asNeeded": false,
        "administrationInstructions": "{\"instructions\":\"As directed\"}",
        "quantity": 4.0,
        "quantityUnits": "Tablet",
        "numberOfRefills": null
      },
      "durationUnits": "Days",
      "drugNonCoded": null,
      "action": "NEW",
      "duration": 2
    }
  ]
}
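For reference, the shape of this payload can be captured in TypeScript. The interfaces below are a partial model inferred from the sample above, not official Bahmni types; field names and optionality are assumptions based on this single response.

```typescript
// Partial TypeScript model of the prescribedAndActive response,
// inferred from the sample payload above (not an official Bahmni type).

interface DrugConcept {
  uuid: string;
  name: string;
  shortName: string;
  conceptClass: string;
  mappings: { code: string; name: string | null; source: string }[];
}

interface DosingInstructions {
  dose: number;
  doseUnits: string;
  route: string;
  frequency: string;
  asNeeded: boolean;
  quantity: number;
  quantityUnits: string;
}

interface VisitDrugOrder {
  visit: { uuid: string; startDateTime: number }; // epoch milliseconds
  drug: { name: string; uuid: string; form: string; strength: string };
  provider: { uuid: string; name: string };
  concept: DrugConcept;
  dosingInstructions: DosingInstructions;
  dateActivated: number;
  effectiveStartDate: number;
  effectiveStopDate: number;
  duration: number;
  durationUnits: string;
  action: string;
}

interface PrescribedAndActiveResponse {
  visitDrugOrders: VisitDrugOrder[];
}

// Small usage example: pull the drug names out of a response.
function drugNames(response: PrescribedAndActiveResponse): string[] {
  return response.visitDrugOrders.map((order) => order.drug.name);
}
```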

The /bahmnicore/drugOrders/prescribedAndActive model differs significantly
from the OpenMRS FHIR
MedicationRequest
model in both structure and representation. While the Bahmni REST model is
tailored for UI rendering with visit-context grouping and includes
OpenMRS-specific constructs like concept, drug, orderNumber, and flexible
dosing instructions, the FHIR MedicationRequest model adheres to international
standards with a normalized, reference-based structure using resources such as
Medication, Encounter, and Practitioner, and coded elements in
CodeableConcept and Timing.
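To make the structural gap concrete, here is an illustrative sketch of how one visitDrugOrders entry might be reshaped toward a FHIR R4 MedicationRequest. The input shape, the field selection, and the function itself are assumptions for illustration; a real migration would consume the OpenMRS FHIR module's output rather than hand-roll this mapping.

```typescript
// Illustrative only: reshaping a Bahmni drug order toward a FHIR R4
// MedicationRequest. Input shape and field choices are assumptions
// based on the sample payload, not a conformant mapping.

interface BahmniOrder {
  uuid: string;
  drug: { name: string };
  dosingInstructions: {
    dose: number;
    doseUnits: string;
    route: string;
    frequency: string;
  };
  dateActivated: number; // epoch milliseconds
  provider: { name: string };
}

function toMedicationRequestSketch(order: BahmniOrder) {
  const dosing = order.dosingInstructions;
  return {
    resourceType: "MedicationRequest",
    id: order.uuid,
    status: "active",
    intent: "order",
    // FHIR prefers coded references to a Medication resource; a plain
    // display string is the minimal form shown here.
    medicationCodeableConcept: { text: order.drug.name },
    authoredOn: new Date(order.dateActivated).toISOString(),
    requester: { display: order.provider.name },
    dosageInstruction: [
      {
        text: `${dosing.dose} ${dosing.doseUnits}, ${dosing.frequency}, ${dosing.route}`,
        route: { text: dosing.route },
      },
    ],
  };
}
```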

Research

The "Research" phase of the approach involves generating an
MCP-augmented LLM analysis of the selected Display Control. This phase is
centered around understanding the legacy system's behavior by analyzing
its source code and conducting reverse engineering. Such analysis is
essential for informing the forward engineering efforts. While not all
identified requirements may be carried forward, particularly in long-lived
systems where certain functionalities may have become obsolete, it is
important to have a clear understanding of existing behaviors. This enables
teams to make informed decisions about which components to retain, discard,
or redesign in the target system, ensuring that the modernization effort
aligns with current business needs and technical goals.

At this stage, it is helpful to take a step back and consider how human
developers typically approach a migration of this nature. One key insight
is that migrating from Angular to React relies heavily on contextual
understanding. Developers must draw upon various dimensions of knowledge
to ensure a successful and meaningful transition. The critical areas of
focus typically include:

  • Feature Analysis: understanding the functional intent and role of the
    existing Angular components within the broader application.
  • Data Model Analysis: reviewing the underlying data structures and their
    relationships to assess compatibility with the new architecture.
  • Data Flow Mapping: tracing how data moves from backend APIs to the
    frontend UI to ensure continuity in the user experience.
  • FHIR Model Alignment: determining whether the existing data model can be
    mapped to an HL7 FHIR-compatible structure, where applicable.
  • Comparative Analysis: evaluating structural and functional similarities,
    differences, and potential gaps between the old and target implementations.
  • Performance Considerations: taking into account areas for performance
    enhancement in the new system.
  • Feature Relevance: assessing which features should be carried forward,
    redesigned, or deprecated based on current business needs.

This context-driven analysis is often the most challenging aspect of
any legacy migration. Importantly, modernization is not merely about
replacing outdated technologies; it is about reimagining the future of the
system and the business it supports. It involves the evolution of the
application across its entire lifecycle, including its architecture, data
structures, and user experience.

The expertise of subject matter experts (SMEs) and domain specialists
is crucial to understand existing behavior and to prepare a guide for the
migration. And what better way to capture the expected behavior than
through well-defined test scenarios against which the migrated code will
be evaluated. Understanding which scenarios are to be tested is important
not just for making sure that everything that used to work still works
and that the new behavior works as expected, but also because your LLM
now has a clearly defined set of goals that it knows are expected. By
defining these goals explicitly, we can make the LLM's responses as
deterministic as possible, avoiding the unpredictability of probabilistic
responses and ensuring more reliable outcomes during the migration
process.

Based on this understanding, I developed a comprehensive and
strategically structured prompt
designed to capture all relevant information effectively.

While the prompt covers all the expected areas, such as data flow,
configuration, key functions, and integration, it also includes several
sections that warrant special mention:

  • FHIR Compatibility: this section maps the custom Bahmni data model
    to HL7 FHIR resources and highlights gaps, thereby supporting future
    interoperability efforts. Completing this mapping requires a solid understanding
    of FHIR concepts and resource structures, and can be a time-consuming task. It
    typically involves several hours of detailed analysis to ensure accurate
    alignment, compatibility verification, and identification of divergences between
    the OpenMRS and FHIR medication models, which can now be done in a matter of
    seconds.
  • Testing Guidelines for the React + TypeScript Implementation Over OpenMRS
    FHIR: this section presents structured test scenarios that emphasize data
    handling, rendering accuracy, and FHIR compliance for the modernized frontend
    components. It serves as an excellent foundation for the development process,
    setting out a mandatory set of criteria that the LLM should satisfy while
    rebuilding the component.
  • Customization Options: this outlines available extension points and
    configuration mechanisms that enhance maintainability and adaptability across
    various implementation scenarios. While some of these options are documented,
    the LLM-generated analysis often uncovers additional customization paths
    embedded in the codebase. This helps identify legacy customization approaches
    more effectively and ensures a more exhaustive understanding of existing
    capabilities.

To gather the required data, I utilized two lightweight servers:

  • An Atlassian MCP server to extract any available documentation on the
    display control.
  • A filesystem MCP server, where the legacy frontend code and configuration
    were mounted, to provide source code-level analysis.

Determine 3: MCP + Cline + Claude Setup Diagram

While optional, this filesystem server allowed me to focus on the target
system's code within my IDE, with the legacy reference codebases conveniently
accessible through the mounted server.

These lightweight servers each expose specific capabilities through the
standardized Model Context Protocol, which is then used by Cline (my client in
this case) to access the code base, documentation, and configuration. Since the
configurations shipped are opinionated and the documents often outdated, I added
specific instructions to take the source code as the only source of truth and
the rest as supplementary reference.
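As an illustration, wiring a filesystem server into Cline's MCP settings might look like the following. The server name and mounted path are examples; the package shown is the reference filesystem server from the Model Context Protocol project, and any real setup should follow the current Cline and MCP documentation.

```json
{
  "mcpServers": {
    "legacy-bahmni-code": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/legacy/bahmni-frontend"
      ]
    }
  }
}
```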

Review

The second phase of the approach is where the human in the loop
becomes invaluable.

The AI-generated analysis isn't meant to be accepted at face value,
especially for complex codebases. You'll still need a domain expert and an
architect to vet, contextualize, and guide the migration process. AI alone
is not going to migrate an entire project seamlessly; it requires
thoughtful decomposition, clear boundaries, and iterative validation.

Not all of these requirements will necessarily be incorporated into the
target system; for example, the ability to print a prescription sheet based
on the medications prescribed is deferred for now.

In this case, I augmented the analysis with sample responses from the
FHIR endpoint, while discarding aspects of the system that are not
relevant to the modernization effort. This includes performance
optimizations, test cases that are not directly relevant to the migration,
and configuration options such as the number of rows to display and
whether to show active or inactive medications. I felt these can be
addressed as part of the next iteration.

For instance, consider the unit test scenarios outlined for rendering
treatment data:

        ✅ Happy Path

        It should correctly render the drugName column.
        It should correctly render the status column with the appropriate Tag color.
        It should correctly render the priority column with the correct priority Tag.
        It should correctly render the provider column.
        It should correctly render the startDate column.
        It should correctly render the duration column.
        It should correctly render the frequency column.
        It should correctly render the route column.
        It should correctly render the doseQuantity column.
        It should correctly render the instruction column.

        ❌ Sad Path

        It should show a “-” if startDate is missing.
        It should show a “-” if frequency is missing.
        It should show a “-” if route is missing.
        It should show a “-” if doseQuantity is missing.
        It should show a “-” if instruction is missing.
        It should handle cases where the row data is undefined or null.

Replacing missing values with “-” in the sad path scenarios has been
dropped, as it does not align with the requirements of the target system.
Such decisions should be guided by input from the subject matter experts
(SMEs) and stakeholders, ensuring that only functionality relevant to the
current business context is retained.

The literature gathered on the display control now needs to be coupled with
project conventions, practices, and guidelines, without which the LLM is free
to interpret the above request based on the data it was trained with. This includes
access to functions that can be reused, sample data models and services, and
reusable atomic components that the LLMs can now rely on. If such practices,
style guides, and guidelines are not clearly defined, every iteration of the
migration risks producing non-conforming code. Over time, this can contribute to
a fragmented codebase and an accumulation of technical debt.

The core objective is to define clear, project-specific coding standards and
style guides to ensure consistency in the generated code. These standards act as
a foundational reference for the LLM, enabling it to produce output that aligns
with established conventions. For example, the Google TypeScript Style Guide can
be summarized and documented as a TypeScript style guide stored in the target
codebase. This file is then read by Cline at the start of each session to ensure
that all generated TypeScript code adheres to a consistent and recognized
standard.

Rebuild

Rebuilding the feature for the target system with LLM-generated code is
the final phase of the workflow. Now, with all the required data gathered,
we can get started with a simple prompt:

You are tasked with building a Treatment display control in the new react ts fhir frontend. You can find the details of the legacy Treatment display control implementation in docs/treatments-legacy-implementation.md. Create the new display control by following the docs/display-control-guide.md

At this stage, the LLM generates the initial code and test scenarios,
leveraging the information provided. Once this output is produced, it is
essential for domain experts and developers to conduct a thorough code review
and apply any necessary refactoring to ensure alignment with project standards,
functionality requirements, and long-term maintainability.

Refactoring the LLM-generated code is critical to ensuring the code remains
clean and maintainable. Without proper refactoring, the result could be a
disorganized collection of code fragments rather than a cohesive, efficient
system. Given the probabilistic nature of LLMs and the potential discrepancies
between the generated code and the original objectives, it is essential to
involve domain experts and SMEs at this stage. Their role is to thoroughly
review the code, validate that the output aligns with the initial expectations,
and assess whether the migration has been successfully executed. This expert
involvement is crucial to ensure the quality, accuracy, and overall success of
the migration process.

This phase should be approached as a comprehensive code review, similar to
reviewing the work of a senior developer who possesses strong language and
framework expertise but lacks familiarity with the specific project context.
While technical proficiency is essential, building robust systems requires a
deeper understanding of domain-specific nuances, architectural decisions, and
long-term maintainability. In this context, the human-in-the-loop plays a
pivotal role, bringing the contextual awareness and system-level understanding
that automated tools or LLMs may lack. It is a crucial process to ensure that
the generated code integrates seamlessly with the broader system architecture
and aligns with project-specific requirements.

In our case, the intent and context of the rebuild were clearly defined,
which minimized the need for post-review refactoring. The requirements gathered
during the research phase, combined with clearly articulated project conventions,
technology stack, coding standards, and style guides, ensured that the LLM had
minimal ambiguity when generating code. As a result, there was little left for
the LLM to infer independently.

That said, any unresolved questions regarding the implementation plan can
lead to deviations from the expected output. While it is impossible to
anticipate and answer every such question in advance, it is important to
acknowledge the inevitability of "unknown unknowns." This is precisely where a
thorough review becomes essential.

In this particular instance, my familiarity with the display control we were
rebuilding allowed me to proactively minimize such unknowns. However, this level
of context may not always be available. Therefore, I strongly recommend
conducting a detailed code review to help uncover these hidden gaps. If
recurring issues are identified, the prompt can then be refined to address them
preemptively in future iterations.

The allure of LLMs is undeniable; they offer a seemingly effortless solution
to complex problems, and developers can often produce such a solution quickly,
without needing years of deep coding experience. This should not create a bias
in the experts, who must not succumb to the allure of LLMs and ultimately take
their hands off the wheel.

Outcome

Figure 4: A high-level overview of the process: taking a feature from the legacy codebase and using LLM-assisted analysis to rebuild it within the target system

In my case, the code generation process took about 10 minutes to
complete. The analysis and implementation, including both unit and
integration tests with approximately 95% coverage, were completed using
Claude 3.5 Sonnet (20241022). The total cost for this effort was about
$2.

Figure 5: Legacy Treatments Display Control built using Angular and integrated with OpenMRS REST endpoints

Figure 6: Modernized Treatments Display Control rebuilt
using React and TypeScript, leveraging FHIR endpoints

Without AI assistance, both the technical analysis and implementation
would likely have taken a developer a minimum of two to three days. In my
case, developing a reusable, general-purpose prompt, grounded in the shared
architectural principles behind the roughly 30 display controls in
Bahmni, took about five focused iterations over four hours, at a slightly
higher inference cost of around $10 across these cycles. This effort was
essential to ensure the generated prompt was modular and broadly
applicable, given that each display control in Bahmni is essentially a
configurable, embeddable widget designed to enhance system flexibility
across different clinical dashboards.

Even with AI-assisted generation, one of the key costs in development
remains the time and cognitive load required to analyze, review, and
validate the output. Thanks to my prior experience with Bahmni, I was able
to review the generated analysis in under 15 minutes, supplementing it
with quick parallel research to validate the claims and data mappings. I
was pleasantly surprised by the quality of the analysis: the data model
mapping was precise, the transformation logic was sound, and the suggested
test cases covered a comprehensive range of scenarios, both typical
and edge cases.

Code review, however, emerged as the most significant challenge.
Reviewing the generated code line by line across all changes took me
roughly 20 minutes. Unlike pairing with a human developer, where
iterative discussions happen at a manageable pace, working with an AI system
capable of producing entire modules within seconds creates a bottleneck
on the human side, especially when attempting line-by-line scrutiny. This
isn't a limitation of the AI itself, but rather a reflection of human
review capacity. While AI-assisted code reviewers are often proposed as a
solution, they can typically identify syntactic issues, adherence to best
practices, and potential anti-patterns, but they struggle to assess intent,
which is critical in legacy migration projects. This intent, grounded in
domain context and business logic, must still be confirmed by the human in
the loop.

For a legacy modernization project involving a migration from AngularJS
to React, I would rate this experience an absolute 10/10. This capability
opens up the opportunity for individuals with solid technical
expertise and strong domain knowledge to migrate a legacy codebase to a
modern stack with minimal effort and in significantly less time.

I believe that with a bottom-up approach, breaking the problem down
into atomic components, and clearly defining best practices and
guidelines, AI-generated code could greatly accelerate delivery
timelines, even for complex brownfield projects, as we saw with Bahmni.

The initial research and the subsequent review by experts result in a
document crisp enough to let us use the limited space in the context
window efficiently, so we can fit more information into a single
prompt. Effectively, this allows the LLM to analyze code in a way that is
not constrained by how developers originally organized it.
This also reduces the overall cost of using LLMs, as a brute-force
approach would mean spending ten times as much even for a much
simpler project.
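To make that budgeting concrete, the condensed analysis document can be packed ahead of raw source files until an assumed context-window budget is exhausted. The sketch below is purely illustrative: the token heuristic, the budget constant, and the file shape are my assumptions, not Bahmni code.

```javascript
// Illustrative only: pack a condensed analysis document ahead of raw
// source files until an assumed context-window budget is exhausted.
const TOKEN_BUDGET = 128000;                // assumed model context size
const approxTokens = (text) => Math.ceil(text.length / 4); // rough heuristic

function buildPrompt(analysisDoc, sourceFiles) {
  const parts = [analysisDoc];              // the crisp analysis goes first
  let used = approxTokens(analysisDoc);
  for (const file of sourceFiles) {
    const cost = approxTokens(file.content);
    if (used + cost > TOKEN_BUDGET) break;  // stop once the budget is spent
    parts.push(`// ${file.path}\n${file.content}`);
    used += cost;
  }
  return parts.join('\n\n');
}
```

Because the analysis is placed first and sources are appended only while they fit, the prompt degrades gracefully: the distilled knowledge always survives even when the raw code does not.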

While modernizing the legacy codebase is the main product of this
proposed approach, it is not the only valuable one. The documentation
generated about the system is valuable in two ways: presented to end
users and implementers, it complements or fills gaps in existing system
documentation; it can also stand in as a knowledge base about the system
for forward engineering teams pairing with LLMs to enhance or enrich
system capabilities.

Why the Review Phase Matters

A key enabler of this successful migration was a well-structured plan
and detailed scope analysis phase prior to implementation. This early
investment paid dividends during the code generation phase. Without a
clear understanding of the data flow, configuration structure, and
display logic, the AI would have struggled to produce coherent and
maintainable outputs. If you have worked with AI before, you may have
noticed that it is consistently eager to generate output. In an earlier
attempt, I proceeded without adequate caution and skipped the research
step, only to discover that the generated code included a useMemo hook
for an operation that was computationally trivial. One of the success
criteria in the generated analysis was that the code should be
performant, and this appeared to be the AI's way of fulfilling that
requirement.
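To see why that is wasteful, consider what useMemo amounts to outside React: a single-slot cache around a computation. The sketch below is a reconstruction of the pattern, not the actual generated code, and the memoized operation shown is hypothetical; the point is that the caching machinery costs more than simply recomputing a trivial value.

```javascript
// Reconstructed sketch of the pattern, not the actual generated code.
// A single-slot memoizer, analogous in spirit to React's useMemo:
function memoizeOne(fn) {
  let lastArgs = null;
  let lastResult;
  return (...args) => {
    if (lastArgs && args.length === lastArgs.length &&
        args.every((arg, i) => arg === lastArgs[i])) {
      return lastResult;                  // cache hit: reuse previous value
    }
    lastArgs = args;
    lastResult = fn(...args);
    return lastResult;
  };
}

// The memoized operation was about this trivial; the argument comparison
// above already does more work than the concatenation it avoids.
const fullName = (first, last) => `${first} ${last}`;
const memoizedFullName = memoizeOne(fullName);
console.log(memoizedFullName('Ada', 'Lovelace')); // "Ada Lovelace"
```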

Interestingly, the AI even added unit tests to validate the
performance of that specific operation. However, none of this was
explicitly required; it arose solely as a consequence of poorly defined
intent. The AI incorporated these changes without hesitation, despite not
fully understanding the underlying requirements or seeking clarification.
Reviewing both the generated analysis and the corresponding code ensures
that unintended additions are identified early and that deviations from
the original expectations are minimized.

Review also plays a key role in avoiding unnecessary back-and-forth
with the AI during the rebuild phase. For instance, while refining the
prompt for the "Display Control Implementation Guide",
I initially did not have the section specifying the unit tests to be
included. As a result, the AI generated a test that was largely
meaningless, offering a false sense of test coverage with no real
connection to the code under test.

Figure 7: AI-generated unit test that verifies reality is still real

In an attempt to fix this test, I began prompting
extensively, providing examples and detailed instructions on how the unit
test should be structured. However, the more I prompted, the further the
process deviated from the original goal of rebuilding the display
control. The focus shifted entirely to resolving unit test issues, with
the AI even beginning to review unrelated tests in the codebase and
suggesting fixes for problems it identified there.

Eventually, recognizing the growing divergence from the intended
task, I restarted the process with clearly defined instructions from the
outset, which proved to be far more effective.

This leads us to a crucial insight: Don't Interrupt the AI.

LLMs, at their core, are predictive sequence generators that build
narratives token by token. When you interrupt a model mid-stream to
course-correct, you break the logical flow it was constructing.
Stanford's "Lost in the Middle"
study revealed that models can suffer up to a 20%
drop in accuracy when critical information is buried in the middle of
long contexts, as opposed to when it is clearly framed upfront. This underscores
why starting with a well-defined prompt and letting the AI complete its
task unimpeded often yields better results than constant backtracking or
mid-flight corrections.

This idea is also reinforced in "Why Human Intent Matters More as AI
Capabilities Grow" by Nick Baumann,
which argues that as model capabilities scale, clear human intent, not
just brute model strength, becomes the key to unlocking useful output.
Rather than micromanaging each response, practitioners benefit most by
designing clear, unambiguous setups and letting the AI complete the arc
without interruption.

Conclusion

It is important to clarify that this approach is not meant to be a
silver bullet capable of executing a large-scale migration without
oversight. Rather, its strength lies in its ability to significantly
reduce development time, potentially by several weeks, while maintaining
quality and control.

The goal is not to replace human expertise but to amplify it: to
accelerate delivery timelines while ensuring that quality and
maintainability are preserved, if not improved, during the transition.

It is also important to note that the experience and results discussed
so far are limited to read-only controls. More complex or interactive
components may present additional challenges that require further
research and refinement of the prompts used.

One of the key insights from exploring GenAI for legacy migration is
that while large language models (LLMs) excel at general-purpose tasks and
predefined workflows, their true potential in large-scale enterprise
transformation is only realized when guided by human expertise. This is
well illustrated by Moravec's Paradox, which observes that tasks perceived
as intellectually complex, such as logical reasoning, are comparatively easier
for AI, while tasks requiring human intuition and contextual
understanding remain challenging. In the context of legacy modernization,
this paradox reinforces the importance of subject matter experts (SMEs)
and domain specialists, whose deep experience, contextual understanding,
and intuition are indispensable. Their expertise enables more accurate
interpretation of requirements, validation of AI-generated outputs, and
informed decision-making, ultimately ensuring that the transformation is
aligned with the organization's goals and constraints.

While project-specific complexities may render this approach ambitious,
I believe that by adopting this structured workflow, AI-generated code can
significantly accelerate delivery timelines, even in the context of complex
brownfield projects. The intent is not to replace human expertise, but to
augment it: streamlining development while safeguarding, and potentially
enhancing, code quality and maintainability. Although the quality and
architectural soundness of the legacy system remain critical factors, this
methodology offers a strong starting point. It reduces manual overhead,
creates forward momentum, and lays the groundwork for cleaner and more
maintainable implementations through expert-led, guided refactoring.

I firmly believe that following this workflow opens up the opportunity
for individuals with solid technical expertise and strong domain
knowledge to migrate a legacy codebase to a modern stack with minimal
effort and in significantly less time.

