Tooling · 6 min read

Extract-Based vs. Live-Connected: Two Approaches to Migration Security Tooling

When evaluating tools for the security workstream of an ERP migration, one of the most consequential architectural decisions is how the tool connects to your data. There are two fundamental approaches: live-connected tools that integrate directly with the source ERP system, and extract-based tools that work from exported data files. Each approach carries tradeoffs that affect deployment speed, operational risk, IT governance overhead, and team autonomy.

The Live-Connected Model

Live-connected tools establish a direct integration with the ERP system, typically through RFC connections for SAP environments or API endpoints for cloud platforms. They read role assignments, authorization objects, user master records, and usage logs directly from the source system in real time or on a scheduled refresh.
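To make the integration concrete, here is a minimal sketch of what a live read from an SAP system might look like, assuming the open-source pyrfc connector and the generic RFC_READ_TABLE function module. The hostname, service account, and field selection are placeholders for illustration, not any particular tool's implementation.

```python
# Minimal sketch of a live-connected read from SAP via RFC.
# Assumes the open-source pyrfc library and a service account
# whose authorization profile permits RFC_READ_TABLE.
from pyrfc import Connection

# Placeholder connection parameters; in practice these are
# provisioned by the Basis team alongside firewall rules.
conn = Connection(
    ashost="sap-prod.example.com",  # hypothetical host
    sysnr="00",
    client="100",
    user="SVC_SECTOOL",             # hypothetical integration user
    passwd="********",
)

# Read current role-to-user assignments from the AGR_USERS table.
result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="AGR_USERS",
    DELIMITER="|",
    FIELDS=[{"FIELDNAME": "AGR_NAME"}, {"FIELDNAME": "UNAME"}],
    ROWCOUNT=1000,
)

for row in result["DATA"]:
    role, user = row["WA"].split("|")
    print(role.strip(), user.strip())
```

Even this small example hints at the governance surface involved: a network path, a named service account, and an authorization profile all have to exist before the first row is read.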

The advantage of this approach is data freshness. The tool always reflects the current state of the system, which is useful for platforms that serve an ongoing governance function beyond the migration. Changes made in the ERP system are automatically reflected in the tool's analysis.

The disadvantages are significant, particularly in a migration context. Establishing a live connection requires coordination with the Basis or system administration team. It involves firewall rules, service accounts, authorization profiles for the integration user, and often a security review from IT governance. In large enterprises, this process can take weeks or months before the tool is operational.

There's also a risk dimension. Any tool with read access to a production ERP system is a potential attack surface. Security teams are justifiably cautious about granting system-level access to third-party tools, especially during a migration when the environment is already under elevated change risk.

The Extract-Based Model

Extract-based tools take a different approach. Instead of connecting to the ERP system directly, they ingest data files that have been exported from the system. A standard set of reports or queries produces CSV or Excel files containing user master data, role assignments, transaction usage logs, and authorization values. These files are uploaded to the tool, which performs its analysis on the extracted dataset.
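As a rough sketch of what that ingestion looks like, here is a pandas example that joins the exported files into a single analysis dataset. The file and column names are invented for illustration; the actual export layout varies by platform.

```python
# Minimal sketch of extract ingestion, assuming CSV exports with
# the (hypothetical) file and column names shown here.
import pandas as pd

# Load the exported snapshot files.
users = pd.read_csv("user_master.csv")             # e.g. USER_ID, DEPARTMENT, STATUS
assignments = pd.read_csv("role_assignments.csv")  # e.g. USER_ID, ROLE_NAME, VALID_TO
usage = pd.read_csv("transaction_usage.csv")       # e.g. USER_ID, TCODE, EXEC_COUNT

# Join the tables into one dataset: every user with their assigned
# roles and observed transaction usage.
dataset = (
    users.merge(assignments, on="USER_ID", how="left")
         .merge(usage, on="USER_ID", how="left")
)

print(dataset.head())
```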

The primary advantage is deployment speed. There's no integration to build, no firewall rules to configure, no IT governance review for system access. A consultant can export the required data, upload it to the tool, and begin analysis within the same day. For consulting engagements where a small team needs to be productive quickly, this is a meaningful advantage.

The second advantage is operational isolation. The tool never touches the production system. There's no service account to maintain, no connection to monitor, and no risk of the tool's activity affecting system performance. This makes it easier to gain approval from IT security teams and reduces the governance overhead throughout the engagement.

The tradeoff is that the data is a snapshot. If roles or assignments change in the source system after the extract is taken, the tool's analysis becomes stale. For most migration engagements, this is an acceptable limitation. The security workstream typically works from a baseline extract taken at the start of the mapping phase, and the snapshot is refreshed periodically if the timeline extends.
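One practical way to manage staleness is to diff the baseline extract against each refreshed one, so the team knows exactly what changed in the source system. The sketch below does this with pandas; the file and column names are again invented for illustration.

```python
# Sketch: detect assignment drift between a baseline extract and a
# refreshed one, so the team knows when its snapshot has gone stale.
import pandas as pd

baseline = pd.read_csv("role_assignments_baseline.csv")  # USER_ID, ROLE_NAME
refresh = pd.read_csv("role_assignments_refresh.csv")    # USER_ID, ROLE_NAME

# An outer merge with an indicator column flags rows that appear in
# only one of the two snapshots.
diff = baseline.merge(
    refresh, on=["USER_ID", "ROLE_NAME"], how="outer", indicator=True
)

added = diff[diff["_merge"] == "right_only"]   # assignments new since baseline
removed = diff[diff["_merge"] == "left_only"]  # assignments dropped since baseline

print(f"{len(added)} assignments added, {len(removed)} removed since baseline")
```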

Why This Matters for Migration Teams

The deployment model choice has practical implications that go beyond the technical architecture.

For consulting firms and system integrators, time-to-value is critical. A migration engagement has a defined scope and timeline. Every week spent on tool deployment is a week not spent on the actual mapping work. Extract-based tools eliminate the deployment bottleneck entirely, which is why they tend to be favored in consulting-led engagements.

For enterprise IT teams running their own migrations, the governance considerations weigh more heavily. A live connection means ongoing oversight responsibility. An extract-based approach keeps the tool outside the system perimeter, which simplifies the security and compliance posture during an already complex project.

The size of the migration team also matters. A two-person consulting team deployed to a client site needs tools they can use independently without requiring sustained support from the client's IT organization. An extract-based tool fits this operating model. A live-connected tool would require the consultants to coordinate with the client's Basis team for initial setup and ongoing support, adding friction to the engagement.

Data Sufficiency in Practice

A common objection to the extract-based approach is that exported data might be incomplete or insufficient for the analysis. In practice, the standard data required for role mapping, including user master records, role assignments, transaction usage statistics, and authorization values, is available through well-documented export procedures in every major ERP platform.

SAP provides standard transaction codes (SUIM, ST03N) and tables (such as AGR_1251) for extracting this data. Oracle EBS has equivalent responsibility and user access reports. PeopleSoft and JD Edwards have their own reporting mechanisms. The data needed for migration security analysis isn't locked behind proprietary APIs or undocumented system tables. It's the same data that organizations export for access reviews and audit preparation on a regular basis.

The key is having a clear specification of what data is needed, in what format, covering what time period. When the export requirements are well-defined, the extract process is straightforward and the resulting dataset is fully sufficient for persona derivation, role mapping, and SoD analysis.
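Such a specification can be captured directly in code and checked against every incoming extract. The sketch below is one way to do that; the required files, columns, and the 12-month usage window are invented for the example, not a fixed standard.

```python
# Sketch of a data specification and a completeness check for an
# incoming extract. File names, columns, and the lookback window
# are illustrative.
import pandas as pd

EXPORT_SPEC = {
    "user_master.csv": ["USER_ID", "DEPARTMENT", "STATUS"],
    "role_assignments.csv": ["USER_ID", "ROLE_NAME", "VALID_TO"],
    "transaction_usage.csv": ["USER_ID", "TCODE", "EXEC_COUNT"],  # e.g. 12-month window
    "authorization_values.csv": ["ROLE_NAME", "AUTH_OBJECT", "FIELD", "VALUE"],
}

def validate_extract(spec: dict[str, list[str]]) -> list[str]:
    """Return a list of problems; an empty list means the extract is sufficient."""
    problems = []
    for filename, required_columns in spec.items():
        try:
            # Read only the header row to inspect the columns.
            columns = set(pd.read_csv(filename, nrows=0).columns)
        except FileNotFoundError:
            problems.append(f"missing file: {filename}")
            continue
        for col in required_columns:
            if col not in columns:
                problems.append(f"{filename}: missing column {col}")
    return problems

print(validate_extract(EXPORT_SPEC) or "extract is complete")
```

A check like this turns the export requirements into something testable: the extract either satisfies the spec or produces a concrete list of gaps to send back to the client.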

Choosing the Right Model

For migration-specific use cases, where the goal is to map users from a source to a target system within a defined project window, the extract-based model offers the better tradeoff. It minimizes deployment overhead, eliminates integration risk, and enables fast time-to-value for the migration team.

For ongoing access governance use cases, where the goal is continuous monitoring and enforcement of access policies in a production environment, a live-connected model is more appropriate. The data freshness and automation capabilities justify the deployment complexity.

The two models serve different purposes and different phases of the access lifecycle. Recognizing which problem you're solving helps you choose the right tool for the job.

See Provisum in action

Automated persona mapping, real-time SoD analysis, and audit-ready documentation for your next ERP migration.

Request a demo