Data is not a neutral, interchangeable Lego block. You don’t just pick it out of the box and use it for what you want. Data comes in different formats, is stored and accessed differently, and can be of varying degrees of quality. Something as small as a misplaced comma or an extra space can have an out-of-proportion impact on how you can use your data. Plus, data is not immune to human error and creativity, leading to all sorts of “interesting” inconsistencies and errors.
Yet, too often we still see people treating the data part of any big real estate implementation or migration as an afterthought. Something a junior member of the IT department can sweep up at the end of the project just before go-live.
While this may have been an option in the past, today the data aspect of implementations and migrations needs to be carefully considered upfront and budgeted for. This ensures that your data will do what you need it to and that you have a fast and accurate migration or implementation with minimal downtime and reduced risk. Ultimately, you’ll be set up to achieve a lower total cost and higher ROI from the overall project.
Ingredient 1: Automation is essential
Today the go-live process of any data-based project needs to be very quick, even if the project itself has spanned several months. In our real estate data migration projects, we plan for cutovers to take place over weekends for minimal business disruption and increased data integrity. But this is only possible if the data migration is automated and error-free.
This applies equally to big migrations such as the implementation of new systems, and to smaller, ongoing data migrations where data needs to move between different real estate services on a regular basis at set intervals. Getting data migrated on time and accurately is paramount. If your data is out of date by the time you are done, it’s pointless.
This means that today we need to be smarter about how we migrate data, and automation is the only viable way to reduce human error, speed up data migrations and scale into the future.
To be clear, this is not due to a deficiency in your source or destination systems. This is a challenge faced across all industries because all sectors produce and rely on more data than ever before. The sheer scale of the data involved means that manual approaches, even if outsourced, are not feasible.
Ingredient 2: Data needs to be standardised
Unlike Lego, you can’t just grab a chunk of data and wedge it into a new system, even if you have automated your data flows. The double-edged sword of using best-of-breed services is that they each do things in a specific way. This results in a mix of data in various formats that must all be translated into the target system’s format. You end up trying to finish your Lego model with Stickle Bricks and some Meccano pieces thrown in for good measure.
A solution could be to do one-to-one translations to get data from one system to the other. But as the number of systems grows, these one-to-one transformations soon multiply: every new system needs a mapping to and from every existing one, so the number of translations, and the effort to manage and maintain them, grows quadratically.
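To make the growth concrete, here is a small sketch (illustrative only, not from any specific project) comparing how many translations you must maintain with point-to-point mappings versus a single shared standard model:

```python
# Illustrative comparison of integration approaches (hypothetical counts).
# Point-to-point needs a mapping for every ordered pair of systems;
# a hub-and-spoke standard model needs just two per system (to and from the hub).

def point_to_point_mappings(n_systems: int) -> int:
    """Translations needed when every system maps directly to every other."""
    return n_systems * (n_systems - 1)

def hub_and_spoke_mappings(n_systems: int) -> int:
    """Translations needed when every system maps only to a standard model."""
    return 2 * n_systems

for n in (3, 5, 10):
    print(f"{n} systems: {point_to_point_mappings(n)} point-to-point "
          f"vs {hub_and_spoke_mappings(n)} hub-and-spoke")
# 3 systems: 6 vs 6; 5 systems: 20 vs 10; 10 systems: 90 vs 20
```

At three systems the two approaches cost about the same, which is why point-to-point feels workable at first; the gap only opens up as the estate grows.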
Our approach has been to create a standard data model – a data Esperanto of sorts – to create a common ground that all data is transformed to and from. Think of it as a hub and spoke network instead of a cat’s cradle of unique and increasingly unmanageable relationships. Data from multiple sources is aligned with one standard structure. This makes it easy to spot data issues or gaps and then convert the data to the target system’s formats with one click, ready to load. Finally, it is then possible to report on the source data automatically and immediately.
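The standard-model idea can be sketched in a few lines of Python. All the names and fields below (`StandardLease`, the system A and B formats, the rent normalisation) are hypothetical, invented purely to illustrate the hub-and-spoke pattern, not the actual data model described above:

```python
# Minimal sketch of a hub-and-spoke standard data model (all names hypothetical).
# Each source format is normalised into one canonical record, validated,
# then exported to the target system's expected shape.
from dataclasses import dataclass

@dataclass
class StandardLease:          # the "data Esperanto": one canonical shape
    property_id: str
    tenant: str
    monthly_rent: float       # always normalised to one agreed unit

def from_system_a(row: dict) -> StandardLease:
    # Hypothetical source system stores rent annually; normalise to monthly
    return StandardLease(row["prop"], row["tenant_name"], row["annual_rent"] / 12)

def validate(lease: StandardLease) -> list:
    # Standardisation makes gaps and issues easy to spot before loading
    issues = []
    if not lease.tenant:
        issues.append(f"{lease.property_id}: missing tenant")
    if lease.monthly_rent <= 0:
        issues.append(f"{lease.property_id}: invalid rent")
    return issues

def to_system_b(lease: StandardLease) -> dict:
    # Hypothetical target format, generated from the standard model
    return {"PropertyRef": lease.property_id,
            "TenantName": lease.tenant,
            "RentPCM": round(lease.monthly_rent, 2)}

source_row = {"prop": "BLDG-42", "tenant_name": "Acme Ltd", "annual_rent": 60000}
lease = from_system_a(source_row)
assert validate(lease) == []      # flag problems before load, not after
print(to_system_b(lease))         # ready-to-load target record
```

The point of the pattern is that adding a new source or target system only means writing one new `from_…` or `to_…` function against the standard model, and validation logic is written once rather than once per system pair.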
Ingredient 3: Don’t DIY, hire a great chef instead
Automation and standardisation are two important ingredients in successful data migrations and implementations, but the third really unlocks the potential. Essentially, don’t do it yourself.
Yes, all businesses are data businesses today, and data is one of your most valuable assets. But don’t overburden your already stretched internal teams with this as well. Rather, call in a specialist who has gained experience working with other systems and clients and has the know-how to understand what you need and implement it on your behalf. This ensures a faster, smoother, successful data migration while your teams focus on service delivery.
The result: a successful, modern real estate data migration
I remember a recent project in North America for a CRE company where, using our integration tools, we mapped source data into our data model and automatically generated import files for Yardi Voyager. It took about 20 minutes to load the lease information, compared to the day or two it would have taken our client to do manually, and we eliminated any risk of human error.
Imagine one of your employees spending two or three days a month typing in lease data, probably bored out of their mind, as well as unavailable to work on tasks that are more profitable and strategic. That adds up to a month every year that the person spends on something boring and repetitive that a machine could do faster and more accurately.
Today, bad data quality costs any business real money. But the combination of automation, standardisation and the experience and know-how to flex and meet specific client requirements can unlock structure and profit in an infinitely unstructured world.