Flexibility. Speed. Volume. Big Data has become a must-have marketing and technological reality, simply because electronic data repositories have grown enormously over 50 years of business-process computerisation.
However, though we’re quite right to talk about information as a new “resource”, like oil, profiting from this modern-day Eldorado requires mastering the exploration and extraction of the potential in our data, the processing of relevant information and the refining of useful knowledge.
Back to (Data) basics
Beyond the technological heroism to which they’ve accustomed us, IT departments have pushed us from the IT system age into the information service era.
An integrated “must-have”, like electronics and automobiles, IT is now devoted to the uses of the information that it produces.
Today, the Big Data phenomenon confronts IT departments with a new challenge: managing “data” value. And though the purpose remains the same – analysing data in order to produce information – a Big Data project doesn’t simply involve creating a Big Data Warehouse.
A Big Data information service must be dynamic via its continuous inflows, intelligent via its vector data model, responsive via the production of contextual information in real time, and collaborative via its in-memory embedded analytical features.
There are even more technological modules involved in a Big Data project than in the construction of a classic Business Intelligence information service, even though the features on offer are generally the same: operational data collection, integration of the information necessary for functional analyses, and production of knowledge for process optimisation.
The greater technological complexity of a Big Data information service stems from the need for an extra order of magnitude of performance across every functional register: continuous data feeds, unlimited storage, real-time analysis.
For example, at the level of data storage alone, where data are by definition diverse, partitioned and distributed across a Big Data system, more techniques will need to be orchestrated.
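To make this concrete, the following is a minimal sketch, in Python, of hash-based partitioning, one common technique for distributing diverse data across the nodes of such a system. The node names and record keys are hypothetical; real platforms (e.g. HDFS or Cassandra) handle membership, replication and rebalancing automatically.

```python
import hashlib

# Hypothetical storage nodes of a distributed data store.
NODES = ["node-a", "node-b", "node-c"]

def partition_for(key: str, nodes: list) -> str:
    """Route a record to a node by hashing its key, so data is
    spread evenly and the same key always lands on the same node."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Diverse records (customer data, orders, clickstreams) are each
# deterministically assigned to one node.
records = ["customer:42", "order:1001", "clickstream:2024-01-01"]
placement = {r: partition_for(r, NODES) for r in records}
```

A production system layers further techniques on top of this routing step, such as replication and consistent hashing to survive node failures, which is precisely the orchestration overhead mentioned above.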
A Big Data information service must be dynamic, responsive and collaborative
Today, the companies using Big Data are search engines like Yahoo! and Google, social networks like Facebook, Twitter and LinkedIn, and e-commerce sites like eBay and Amazon, the latter also using Cloud Computing for its services.
In brief, global, transatlantic companies that were on the Internet from the very start: “pure players”. However, it would be premature to conclude that Big Data is only for large companies and administrative authorities, and that they’re the only ones that can create added value by using it.
Clearly, just as a motorist uses motorways without having built them, commercial and public organisations, whatever their size, and even individuals can benefit from using Big Data information services, at a cost, of course.
With mass data processing, a new raw material has emerged: information, whose growth is driven by market forces. Its commercialisation, now contemplated much as energy’s once was, promises to be a goldmine for those who control its production chain from A to Z (Business to Business and Business to Consumer).
Accordingly, more than a new value-added service built on a relatively classic marketing model, a Big Data information service must be based on a new equation that could be summed up as follows:
Consumer to Consumer = Consumer to Business + Business to Consumer