Data is the new currency. In 2016, the world entered the “zettabyte era”, with global IP traffic reaching 1.1 zettabytes, or over 1 trillion gigabytes. By 2020, global IP traffic is expected to reach 2.3 zettabytes. This data growth is fuelling new industries and economies, and stimulating innovation within organisations.
This growth is driven by the exponential rise of smartphones, tablets and other mobile devices globally, and by the massive increase in connected devices and the data they gather. Another key trend, the Internet of Things, is also contributing to the volume of data being generated, as the number of connected devices is forecast to be between 40 billion (ABI Research) and 50 billion (Cisco) by 2020. With all this data, how are businesses going to extract useful information and business intelligence to inform decisions and strategy?
“The key to unlocking meaning from data is to apply context – this is what turns raw data into information. Take the numbers 22 and 23. In isolation, they have no meaning. They’re just data. But add the context of time and unit of measure to indicate that the numbers were temperature measurements for two days in January – and the data becomes information,” said Rian Durandt, who heads up the Business Intelligence and Data Analytics division at Digiterra South Africa. In an era obsessed with the volume, variety and velocity of data, much can be gained by considering its context. Unfortunately, this aspect of business intelligence systems is marginalised in favour of technology.
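Durandt’s temperature example can be sketched in a few lines of code. This is a hypothetical illustration, not from the article: the `Measurement` type and its field names are invented for the sketch, which simply shows how attaching time and a unit of measure turns bare numbers into information.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Measurement:
    """Raw data plus the context that turns it into information."""
    value: float       # the raw number, e.g. 22
    unit: str          # unit of measure, e.g. degrees Celsius
    observed_on: date  # the time context

# In isolation, 22 and 23 are just data...
raw = [22, 23]

# ...until context (time and unit) makes them information:
# temperature readings for two days in January.
readings = [
    Measurement(22, "degrees Celsius", date(2016, 1, 4)),
    Measurement(23, "degrees Celsius", date(2016, 1, 5)),
]

for r in readings:
    print(f"{r.value} {r.unit} on {r.observed_on}")
```

The point is not the class itself but the design choice: the context travels with the data, so no downstream consumer has to guess what the numbers mean.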
Business processes (marketing, ordering, manufacturing, selling and delivering) both consume and generate data: some structured, some unstructured, some arriving frequently, some in large volumes. In isolation, these data sets do not realise their full potential, so the need to integrate, consolidate and aggregate data is met by designing and constructing data warehouses.
“But is data in a data warehouse inherently useful? Following methodologies made famous by the pioneers of data warehousing, these data sets were organised into subject areas, or purposes of use, typically arranged by department, user or function,” he said. “At the time, it made sense to divide this information into bite-sized sets that could be managed and measured, thus dividing a big problem into smaller ones. Unfortunately, by focusing on the technology aspects of data management, the relationship between the business process and the data becomes misaligned. The business process context of the data diminishes.”
Durandt explained that the solution is simple: data should be mapped to the process (aligned to the strategy) – not the other way around. “First strategy – then process – then data. Rather than organising data into traditional datasets (such as source, subject, department or user), it should be organised by process. Following a process-oriented design for a business intelligence solution, bite-sized engagements can not only yield measurable and meaningful use of an organisation’s data, but also point toward gaps.”
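A process-oriented mapping like the one Durandt describes can be sketched as a simple lookup. This is a minimal, hypothetical sketch (the process names, data set names and the `find_gaps` helper are all invented for illustration): each business process lists the data sets it needs, and checking that map against what the warehouse actually holds surfaces the gaps.

```python
# Map each business process to the data sets it requires
# (hypothetical names, for illustration only).
process_data_map = {
    "marketing": ["campaign_responses", "customer_segments"],
    "ordering":  ["orders", "inventory_levels"],
    "delivery":  ["shipments", "delivery_confirmations"],
}

# Data sets currently available in the warehouse.
available_datasets = {"orders", "inventory_levels", "campaign_responses"}

def find_gaps(process_map, available):
    """Return, per process, the data sets it needs but lacks."""
    return {
        process: [d for d in needed if d not in available]
        for process, needed in process_map.items()
    }

gaps = find_gaps(process_data_map, available_datasets)
for process, missing in gaps.items():
    if missing:
        print(f"{process}: missing {', '.join(missing)}")
```

Because the data is organised by process rather than by source or department, the output is directly actionable: each gap is already tied to the process, and hence the strategy, that it affects.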
Durandt concluded that mapping the data to the business process highlights missing data, which can then be prioritised for generation or consumption in a business case. Big data qualifiers such as volume, variety and velocity can be quantifiably managed when considered against the business objectives and processes.
Data Analytics Courses
As data volumes have grown exponentially and data analytics has become more complex, a global shortage of Big Data analytics skills has emerged. Our universities are starting to introduce postgraduate Business Analytics and Big Data courses, but for an intensive course on data analysis, see BMGI’s DataMaster Course or contact Irma Karsten for more information.