TL;DR: Launching IFS Cloud without a rigorous, automated testing framework is a high-stakes gamble with corporate stability. Success in the Evergreen era requires shifting from manual UAT to continuous regression testing and data-driven Mock Cutovers. This guide outlines the mandatory phases for a professional go-live, ensuring your system survives the transition to the 25R1/25R2 update cycles without corrupting your trial balance or paralyzing your supply chain.
{toc}
The Crisis of Conventional Testing
Most ERP projects fail during the final ninety days because leadership treats testing as a checkbox exercise. The "lift-and-shift" mentality, where teams attempt to migrate legacy Apps 9 or 10 habits into IFS Cloud, is a technical suicide mission. In a cloud-native environment, testing is the only mechanism that prevents your custom extensions and OData integrations from collapsing during the next mandatory service update.
This article addresses the systemic disconnect between business requirements and cloud-native architecture. It provides a professional roadmap for CIOs and IT Directors to transition from brittle, database-dependent systems to resilient platforms. We solve the problem of "update paralysis"—the state where an organization is too afraid to take the next release because their customizations are too fragile to survive the move.
The objective of a modern implementation is not to go live. The objective is to stay live.
Architectural Governance: The Testing Foundation
Success starts with the word "No." If your implementation allows every department head to request a custom field or a database trigger, you have already lost. You must establish a Design Authority that enforces a Clean Core strategy with absolute authority. A Clean Core means standard software remains untouched. Any modification must be handled through the IFS Cloud workflow designer or external integrations.
Testing begins at the design phase. If a requirement cannot be mapped to a standard business process or a low-code workflow, it represents a future failure point. By enforcing standardization early, you reduce the testing surface area by 40–60%. This is non-negotiable for anyone planning to stay on the 25R1 or 25R2 release cycle.
The Design Authority Mandate
Every customization must be defended against the standard functionality. If a business requirement can be met by changing a process rather than writing code, the process must change. The ROI of avoiding a customization outweighs the perceived convenience of the old way every time.
Technical Infrastructure and the API-First Reality
IFS Cloud is no longer a monolith. It is a collection of microservices accessed through the Aurena UI. This shift requires a complete re-evaluation of your technical stack. Direct database access is dead. If your integration strategy relies on direct SQL writes or table reads, you are building a system that will fail its first security audit.
Architectural integrity relies on OAuth2 for security and OData Projections for data exchange. This layer acts as the nervous system of the ERP. When correctly configured, it allows for a massive reduction in integration maintenance because the APIs are versioned and stable. Your testing suite must validate these endpoints under load to ensure the integrity of your trial balance remains protected across all connected ledgers.
The Architectural Checklist:
- IAM Configuration: Set up Identity and Access Management with a focus on Single Sign-On (SSO). This is the foundation of user adoption.
- Environment Tiering: Maintain four distinct environments: Build, Development, Test (UAT), and Production. Each serves a specific purpose in the go-live journey.
- Connectivity Strategy: Define how your system interacts with the outside world. Use native projections or tools like n8n for orchestration.
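To make the API-first principle concrete, here is a minimal sketch of composing an OData query URL against an IFS Cloud projection instead of reading tables directly. The projection and entity set names (`CustomerOrderHandling`, `CustomerOrderSet`) and the host are illustrative placeholders; real names come from the API documentation of your own IFS Cloud build, and authentication via an OAuth2 bearer token is assumed to happen separately.

```python
from urllib.parse import urlencode, quote

def build_projection_query(base_url, projection, entity,
                           select=None, filter_expr=None, top=None):
    """Compose an OData query URL for an IFS Cloud projection.

    All traffic goes through the versioned projection layer,
    never through direct table access.
    """
    url = f"{base_url}/main/ifsapplications/projection/v1/{projection}.svc/{entity}"
    params = {}
    if select:
        params["$select"] = ",".join(select)
    if filter_expr:
        params["$filter"] = filter_expr
    if top is not None:
        params["$top"] = str(top)
    if params:
        # Keep $ , ' readable; percent-encode spaces and the rest.
        url += "?" + urlencode(params, quote_via=quote, safe="$,'")
    return url

# Hypothetical example: read released customer orders via the projection layer.
url = build_projection_query(
    "https://ifs.example.com",
    "CustomerOrderHandling",   # assumed projection name
    "CustomerOrderSet",        # assumed entity set name
    select=["OrderNo", "State"],
    filter_expr="State eq 'Released'",
    top=50,
)
```

Because the query is built against a stable, versioned projection contract, the same helper can be reused unchanged by your automated regression suite after each service update.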
Business Process Modeling (BPMN 2.0)
If you cannot draw your business process, you cannot configure it in IFS Cloud. We use BPMN 2.0 because it is the native language of the workflow designer. This is where business process modeling turns from a theoretical exercise into a technical reality. A documented process flow is also machine-readable: when your logic is structured and visual, AI assistants can analyze your bottlenecks and suggest optimizations.
Burying logic in PL/SQL code makes it invisible to modern analytical tools. During testing, the BPMN diagram serves as the master reference. If the system behavior deviates from the visual model, it is a defect. This transparency allows functional consultants to see the logic before a single line of configuration is attempted.
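The "deviation from the visual model is a defect" rule can be checked mechanically. Below is a minimal conformance-checking sketch: the expected BPMN task order is the master reference, and an observed event trace from a test run is validated against it. The task names are hypothetical; this deliberately tolerates skipped optional steps and only flags unknown or out-of-order tasks.

```python
def conforms(expected_flow, observed_events):
    """Check that an observed event trace follows the modeled step order.

    expected_flow: ordered list of BPMN task names (the master reference).
    observed_events: task names recorded by the system during a test run.
    Returns (True, None) on conformance, or (False, offending_event).
    """
    cursor = 0
    for event in observed_events:
        if event not in expected_flow:
            return False, event          # task not present in the model at all
        position = expected_flow.index(event)
        if position < cursor:
            return False, event          # step executed out of modeled order
        cursor = position
    return True, None

# Hypothetical order-to-cash fragment modeled in BPMN.
model = ["Register Order", "Credit Check", "Reserve Stock", "Ship", "Invoice"]

ok, _ = conforms(model, ["Register Order", "Credit Check", "Ship", "Invoice"])
defect_free, bad_step = conforms(model, ["Register Order", "Ship", "Credit Check"])
# The second trace runs Credit Check after Ship: a defect against the model.
```

In practice the observed trace would come from workflow execution logs, but the principle is the same: the diagram, not tribal knowledge, decides what counts as correct behavior.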
The Data Migration Gauntlet: DMM vs. Legacy Hacks
Data migration is the most significant risk to your go-live date. Organizations that rely on manual Excel uploads or the old FndMig tool are inviting disaster. The only professional choice for a large-scale implementation is the IFS Data Migration Manager (DMM). DMM provides a structured environment for cleansing, transforming, and validating data before it ever touches your target environment.
This is where we prevent the trial balance discrepancies that occur when legacy garbage is forced into a modern system. Iterative testing in DMM is a mandatory phase. You are not just moving data; you are performing surgery on your company's history. Failure to cleanse data properly breaks automated workflows and creates reporting silos that haunt the business for years.
The Migration Roadmap:
- Mock 1 (The Structure Test): Focus on mapping basic fields and identifying missing data points. Does the data fit the new containers?
- Mock 2 (The Volume Test): Load full datasets to identify performance bottlenecks. Can the service layer handle one million inventory records?
- Mock 3 (The Cutover Rehearsal): A minute-by-minute simulation of the go-live weekend. This reveals human bottlenecks and timing issues.
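The reconciliation step that closes each mock run can be sketched as a per-account comparison between the legacy trial balance and the DMM-loaded result. This is a minimal illustration, assuming balances are exported as account-to-amount mappings; the 0.0001 default mirrors the 0.01% tolerance used as a go/no-go criterion later in this guide.

```python
def reconcile(legacy, migrated, tolerance=0.0001):
    """Compare per-account balances; flag relative deviations above tolerance.

    legacy / migrated: dicts of account code -> balance.
    tolerance=0.0001 corresponds to the 0.01% go/no-go threshold.
    """
    discrepancies = {}
    for account in set(legacy) | set(migrated):
        old = legacy.get(account, 0.0)
        new = migrated.get(account, 0.0)
        base = max(abs(old), 1.0)        # avoid division by zero on empty accounts
        if abs(new - old) / base > tolerance:
            discrepancies[account] = (old, new)
    return discrepancies

# Hypothetical extract: account 2440 drifted by 125.00 (~0.026%) during the load.
issues = reconcile(
    {"1910": 1_250_000.00, "2440": -480_000.00},
    {"1910": 1_250_000.00, "2440": -480_125.00},
)
```

Run this after every mock load, not just the final one: a mapping defect caught in Mock 1 costs hours; the same defect caught in Production costs the month-end close.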
Industrialized Testing: The TSAK Framework
Testing is no longer a phase. It is a continuous process. To stay Evergreen, you must automate your regression suite. If you depend on human users to manually test every business scenario twice a year, you will fall behind. Update fatigue is real. It is the primary reason organizations stop taking new releases and become stuck on unsupported versions.
Use the IFS Cloud Test Automation Tool (TSAK) or a similar framework to cover at least 80% of your core transactions. This allows your team to focus on testing the 20% of logic that is truly unique to your business. Industrializing your testing provides a 40–60% reduction in long-term maintenance costs. Every custom BPMN workflow must have a corresponding automated script.
The QA Checklist:
- Unit Testing: Verify every individual configuration and workflow in isolation.
- Integration Testing: Ensure data flows seamlessly between IFS and external systems like CRM or MES.
- UAT (User Acceptance Testing): Final validation by business users in a dedicated environment.
- Security Role Validation: Test "Least Privilege" models to ensure users only access necessary projections.
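The security role validation item above lends itself to automation. The sketch below, under the assumption that both the grants in the build and the projections each role actually needs can be exported as sets, flags any excess grant that violates Least Privilege. Role and projection names are hypothetical placeholders.

```python
def validate_least_privilege(role_grants, role_requirements):
    """Flag projections granted to a role but not required by its process model.

    role_grants: role -> set of projections actually granted in the build.
    role_requirements: role -> set of projections the BPMN flows require.
    Returns role -> excess grants; the result should be empty everywhere.
    """
    excess = {}
    for role, granted in role_grants.items():
        required = role_requirements.get(role, set())
        extra = granted - required
        if extra:
            excess[role] = extra
    return excess

# Hypothetical role model: a warehouse operator should never see invoicing.
violations = validate_least_privilege(
    {"WAREHOUSE_OPERATOR": {"ShipmentHandling", "InvoiceHandling"}},
    {"WAREHOUSE_OPERATOR": {"ShipmentHandling"}},
)
```

Wiring a check like this into the regression suite turns "Super User tested it" from a hidden risk into a build-breaking defect.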
Performance and Security Stress Testing
Go-live preparation often ignores the granular reality of OAuth2 and projection-based security. If your UAT was performed using "Super User" accounts, your go-live will fail on Monday morning. Real users will log in and find they lack permissions to execute basic tasks. Security testing must be a standalone phase. Every functional role must be validated against the process modeling documents.
Performance testing is equally critical. A workflow that takes ten seconds to validate a transaction might seem acceptable in a test environment with one user. In a production environment with five hundred users, that same workflow will paralyze your warehouse. You must simulate peak load days, such as month-end closing, to ensure the service layer remains responsive.
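A peak-load simulation of the kind described above can be sketched with a concurrent harness that fires many transactions and reports latency percentiles. The stub below sleeps instead of calling a real endpoint; in an actual load test the callable would be an authenticated OData request, and the user and call counts would be sized to your month-end profile.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def simulate_peak_load(transaction, users=50, calls_per_user=4):
    """Fire concurrent transactions and report latency percentiles.

    `transaction` is a callable standing in for one workflow execution.
    """
    def worker(_):
        start = time.perf_counter()
        transaction()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = sorted(pool.map(worker, range(users * calls_per_user)))
    return {
        "calls": len(latencies),
        "p50": statistics.median(latencies),
        "p95": latencies[int(len(latencies) * 0.95) - 1],
    }

# Stub workflow: replace with a real endpoint call in your load harness.
report = simulate_peak_load(lambda: time.sleep(0.001))
```

The design point is to judge the p95, not the average: the warehouse is paralyzed by the slowest common case, not the typical one.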
The Mock Cutover: Rehearsing for Reality
The Mock Cutover is the final exam. It is a full-scale rehearsal starting Friday and ending Sunday. You simulate data extraction, transformation in DMM, and the final load into a clean Production-like environment. Any manual step taking longer than planned must be optimized or automated. If the Mock Cutover shows your data load takes 48 hours but your business only allows a 24-hour window, you have a structural problem.
This phase also tests the "Human Integration" of the project. Can your technical team handle the stress of a 36-hour window? Do your n8n integration endpoints respond correctly under high-load synchronization? If you have not rehearsed the cutover, you are practicing on your live business environment. This is a professional failure that often leads to rolled-back implementations.
The 72-Hour Window: Command and Control
The actual cutover requires a minute-by-minute plan. You are switching the nervous system of the company. A successful cutover is boring because every detail was rehearsed during the Mock 3 migration. Communication is the only variable. Establish a Command Centre where status updates are issued every thirty minutes. If a step takes 10% longer than planned, the contingency plan must be triggered immediately.
The "No-Go" decision is the most difficult part of the weekend. You must define the criteria for aborting the go-live. If the trial balance does not reconcile within a specific tolerance by a set deadline, you roll back. Courage to roll back is better than the recklessness of a broken launch that halts production for weeks.
Hypercare: Stabilization Metrics
Hypercare should last at least one full financial period. This ensures the first month-end closing is successful and any minor issues are addressed before the project team is disbanded. Success is measured by the declining volume of support tickets and the stability of the OData service layer. This is the final step in the implementation journey.
Stabilization is not just about fixing bugs. It is about fine-tuning the workflow automation. If users are finding workarounds to the standard process, it indicates a failure in the initial design or training. Use this period to reinforce the Clean Core principles and ensure the organization is ready for its first Evergreen update.
The Expert FAQ: Critical Go-Live Questions
What is the most critical metric for Go-Live readiness?
Data reconciliation. If your trial balance and inventory valuations do not match the legacy system within a 0.01% tolerance, you do not launch. Functional bugs can be patched; corrupted financial data is a permanent disaster that erodes trust in the new system.
How many mock cutovers are truly necessary?
A professional implementation requires three. The first validates the logic and mapping, the second validates performance under volume, and the third validates human coordination and timing. Skipping any of these increases your risk exponentially.
Why does the Evergreen model change how we test?
Because you are no longer testing a static system. You are testing a platform that changes every six months. Your testing must be automated so it can be repeated with minimal effort. Manual testing is a relic of the past that leads to update paralysis.
Can we implement IFS Cloud without automated testing?
You can go live without it, but you cannot stay live. Without automation, the bi-annual update cycle will become a massive manual burden that your organization will eventually abandon, leaving you stuck on an unsupported version and increasing technical debt.
The Future of Enterprise Agility
The era of the "locked" ERP—a system so customized that it can never be changed—is over. IFS Cloud offers a platform that evolves with your business. This agility is only possible if you respect the architectural boundaries of the cloud. Every shortcut you take today is a debt that will come due during your next update. Choose the path of technical integrity. Build a system that is an asset to your growth, not a weight on your progress. Professionalism is not an accident. It is a choice made during the testing phase.
{semanticux} {exitcta}
