By Lucy Robson

Data Migration Explained


Data migration is the process of moving data from one system to another, usually as part of a larger project such as a system upgrade, merger, or consolidation. It can be challenging and complex, as it involves not only transferring the data but also ensuring its quality, integrity, and compatibility with the new system. In this blog post, we will explore some of the reasons why an organisation would migrate data, the different types of data that need to be considered, and the best practices for each stage of the extract-transform-load (ETL) process.

Why migrate data?

There are many reasons why an organisation would migrate data, such as:

- To improve performance, efficiency, and security of the data and the systems that use it

- To comply with regulatory or industry standards or requirements

- To enable new features or functionalities that are not supported by the old system

- To consolidate or integrate data from multiple sources or systems

- To reduce costs or complexity of maintaining or operating the old system

- To retire or decommission the old system

Data types and speed of change

Before migrating data, it is important to understand the different types of data that exist in an organisation and how they change over time. This will help determine when and how to migrate them. The main types of data are:

- Configuration data: This is the data that defines how a system operates, such as settings, parameters, rules, etc. Configuration data is usually static or slow moving, meaning it does not change frequently or significantly.

- Sub-master data: This is the data that describes the entities or objects that are used by a system, such as customers, products, suppliers, etc. Sub-master data can change occasionally or regularly.

- Master data: This is the data that provides a single source of truth for the sub-master data across multiple systems, ensuring its consistency and accuracy. Master data is usually slow moving.

- Transactional data: This is the data that records the activities or events that occur in a system, such as orders, invoices, payments, etc. Transactional data is usually fast moving or snapshot/point-in-time, meaning it changes constantly or reflects a specific moment.

The speed of change of the data will drive when it can be migrated. Typically, slow moving and static data can be migrated before a cutover, but fast moving, snapshot data has to be moved during a cutover period, after the legacy system stops transacting and before the new one goes live.

The ETL process

The ETL process is the core of any data migration project. It consists of three main stages: extract, transform, and load.

Extract

The extract stage involves extracting the data from the source system(s) and preparing it for transformation. Considerations for this stage are:

- Where is the data going? Is it operational data or historic? Operational data is needed for the new system to function properly. Historic data is not needed for the new system but may be required for reporting or analysis purposes.

- What is the potential data mapping from old to new? Data mapping is the process of identifying how the source data fields correspond to the target data fields. Data mapping helps ensure that the data is compatible and meaningful in the new system.

- How will the data be tested? Testing is a crucial step to verify that the extracted data is complete, accurate, and consistent. Testing should involve both automated and manual checks and validations.

- How to make the extract consistently repeatable? Repeatability is essential for ensuring that the same results are obtained every time the extract is performed. It can be achieved by using scripts, tools, or procedures that automate and standardise the extract process.

Transform

The transform stage involves transforming the extracted data to meet the requirements and specifications of the target system(s). Some of the considerations for this stage are:

- What tool will be used to transform the data? There are various tools available, such as dedicated ETL tools, scripting languages, databases, etc. The choice of tool depends on factors such as complexity, volume, performance, cost, etc.

- How will the data be cleansed, conformed, mastered, etc.? Data quality is a key aspect of any data migration project. Data quality activities include cleansing (removing errors or inconsistencies), conforming (standardising formats or values), mastering (resolving duplicates or conflicts), etc.

- How to make the transformation consistently repeatable? As with the extract stage, repeatability is vital for ensuring that the same results are obtained every time the transformation is performed. It can be achieved by using scripts, tools, or procedures that automate and standardise the transformation process.

Load

The load stage involves loading the transformed data into the target system(s) and verifying its correctness and completeness. Some of the considerations for this stage are:

- What are the destination system(s) data requirements? The destination system(s) may have specific data requirements, such as formats, types, lengths, constraints, etc. The loaded data should comply with these requirements to avoid errors or issues.

- How will the data migration be tested? Testing is a critical step to ensure that the loaded data is accurate, consistent, and functional in the target system(s). Testing should involve both automated and manual checks and validations, as well as user acceptance testing (UAT).

- How to make the load consistently repeatable? As with the previous stages, repeatability is important for ensuring that the same results are obtained every time the load is performed. It can be achieved by using scripts, tools, or procedures that automate and standardise the load process.

Testing and governance

Testing and governance are two cross-cutting aspects of any data migration project. They involve ensuring that the data migration is performed correctly, efficiently, and securely. Testing depends on being able to repeat the migrations consistently, so minimising manual effort in the ETL process and automating as much of it as possible is desirable.

Governance is the process of managing and controlling the data migration project. It involves defining roles and responsibilities, setting objectives and milestones, monitoring progress and performance, resolving issues and risks, etc. A data migration project will typically run in parallel to an implementation, with key milestones driving the timescale of the migration. Migrated data must be validated in full during user acceptance testing, and it should be treated in the same fashion as a bespoke code modification or a specific configuration, as badly migrated data has the potential to jeopardise a project.

Conclusion

Data migration is a complex and challenging process that requires careful planning, execution, and monitoring. By understanding the reasons for migrating, the types of data involved and their speed of change, and by following best practices for each stage of the ETL process, together with robust testing and governance, an organisation can successfully migrate its data from one system to another.
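For the technically minded, the repeatable extract-transform-load process described above can be sketched in a few lines of Python. This is only an illustrative example: the field names (cust_id, customer_id, etc.), the mapping, and the validation rules are all assumptions invented for the sketch, not taken from any particular system.

```python
# A minimal, repeatable ETL sketch. All field names and rules here are
# illustrative assumptions, not the schema of any real system.

# Data mapping: how source fields correspond to target fields.
FIELD_MAP = {"cust_id": "customer_id", "cust_name": "name", "cntry": "country"}

def extract(source_rows):
    """Extract: pull only the mapped source fields, the same way every run."""
    return [{k: row.get(k) for k in FIELD_MAP} for row in source_rows]

def transform(rows):
    """Transform: apply the mapping, cleanse whitespace, conform country codes."""
    out = []
    for row in rows:
        new = {FIELD_MAP[k]: v for k, v in row.items()}
        new["name"] = (new["name"] or "").strip()            # cleanse
        new["country"] = (new["country"] or "").strip().upper()  # conform
        out.append(new)
    return out

def load(rows, target):
    """Load: enforce the destination's data requirements, rejecting bad rows."""
    loaded, rejected = [], []
    for row in rows:
        if row["customer_id"] is not None and row["name"]:
            target.append(row)
            loaded.append(row["customer_id"])
        else:
            rejected.append(row)  # kept for review rather than silently dropped
    return loaded, rejected

# A tiny legacy extract with one good row and one that fails validation.
legacy = [
    {"cust_id": 1, "cust_name": "  Acme Ltd ", "cntry": "gb"},
    {"cust_id": None, "cust_name": "Orphan row", "cntry": "fr"},
]
target_system = []
loaded, rejected = load(transform(extract(legacy)), target_system)
```

Because each stage is a pure function of its input, running the pipeline twice on the same extract gives the same result, which is exactly the repeatability that testing depends on.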


Thanks for reading! I hope you found this helpful.


Click here to contact us to find out more about how we could help, ask any further questions, or book a demo with one of our consultants.




