Monthly Archives: April 2014

Four banking challenges that business intelligence solutions solve

The banking industry faces four major challenges: a vast range of customers and varied data related to their transactions, ongoing regulatory changes, increasing consolidation, and intensifying competition. To address these challenges, banks employ BI solutions that help them make better business decisions and better target performance goals.

Let’s look at each of them.

Data explosion – Banks handle immense amounts of information, and it is hard to keep track of it all and understand which of it matters. Because this information arrives from varied sources and in varied formats, making sense of consumer needs, tracking trends, identifying profitable areas and monitoring consumer credit from this data is a challenging task.

Consolidation – A host of mergers and acquisitions has resulted in a shift in corporate goals and an increased focus on managing internal systems. This consolidation activity gives banks an opportunity to greatly reduce overhead costs by integrating processes. Banks must identify areas where they can increase efficiency, cut costs and reduce redundancies.

Regulation – Continued regulatory changes, such as Basel II and the Sarbanes-Oxley (SOX) Act, require banks to re-examine many of their operational processes. Banks must integrate their finance and risk functions to comply with these regulations, and they need information analysis and reporting capabilities both to demonstrate compliance and to manage risk.

Competition – Increased competition is making banks look for ways to differentiate themselves by providing top-quality customer service that caters to individual needs. An expanding customer base brings increased diversity in customer preferences and behaviours, and with it growing consumer demands. Banks need to respond to these demands effectively to retain existing customers and gain new ones.

Each of the above challenges requires banks to be proactive in managing and utilizing corporate data to stay ahead of the competition.

A BI solution gives banks the capability to analyze vast amounts of information and make the best business decisions. It also allows them to tap into their huge databases and deliver easy-to-comprehend insights that improve business performance and maintain regulatory compliance.

A BI solution allows companies to easily integrate and cross-reference vast amounts of information from multiple sources, identify relationships within the information, and learn how different factors affect each other.

A BI solution allows multiple users to manipulate the data and glean the most from the information that affects their decision making.

A BI solution caters to many people in different locations and with varied skill levels: everyone from executives who need high-level customized summary data with drill-down capabilities to power users who need to create and design custom reports.

To sum it up, a BI solution helps banks increase revenue while maintaining or reducing costs. Business intelligence software allows banking enterprises to analyze profit and loss, including product sales analysis, campaign management, market segment analysis, and risk analysis. Banks can grow revenue by maximizing customer value over the long term and improving customer acquisition and retention. At the same time, they can reduce costs by managing risk and preventing fraud, as well as by improving operational efficiency.

How the cloud brought BI solutions down to the ground and made them attractive to SMBs

BI solutions, done properly, help SMBs compete with medium and large enterprises by offering a level playing field: in understanding who their best customers are, which products or services are most profitable, which locations are most efficient, how much it costs to launch a new product or enter a new territory, and which marketing activities offer the highest (or lowest) return. They offer insight into the cost of acquiring a new customer and how those costs relate to customer gain or loss. In short, BI takes hunches and hindsight out of decision making and offers logical, data-based answers.
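One of those questions, “which products are most profitable?”, reduces to a simple aggregation. A minimal sketch, assuming hypothetical sale records:

```python
# Hypothetical sale records; each row carries revenue and cost.
from collections import defaultdict

sales = [
    {"product": "A", "revenue": 500.0, "cost": 300.0},
    {"product": "B", "revenue": 400.0, "cost": 150.0},
    {"product": "A", "revenue": 200.0, "cost": 120.0},
    {"product": "C", "revenue": 900.0, "cost": 820.0},
]

# Aggregate per-product profit, then rank.
profit = defaultdict(float)
for s in sales:
    profit[s["product"]] += s["revenue"] - s["cost"]

ranked = sorted(profit.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # most profitable product first
```

Note that product "C" has the highest revenue but the lowest profit, exactly the kind of hunch-busting answer the paragraph above describes.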

[However, one recent study reported that the smaller the company, the less likely it is to use or plan to use BI solutions. While 33% of midsize businesses currently use BI solutions and 28% plan to, among small businesses just 16% currently use them and 16% plan to.]

So, what changed in the “big boy’s crystal ball” that now makes it attractive to small and medium businesses? The reason is that BI solutions have become more affordable and easier to use: newer technologies such as open source, cloud, in-memory technology, Web 2.0 interfaces, and new visualization technology are making BI tools much friendlier to SMBs.

For the current discussion, let’s look at why SMBs are increasingly turning to cloud- and SaaS-based BI solutions to overcome their data mining problems.

Affordable and Easy to Use – A typical BI solution, even a few years ago, was priced well above what a small or medium business could afford; add to that the complication of deploying and managing it, and SMBs simply stayed away. Then came cloud- and SaaS-based BI solutions (BIRST, Indicee for sales-related BI, GoodData, Kognitio, PivotLink and others), which “pulled the rug out from under” the BI majors. SMBs found these solutions easy to use and affordable: they saved on the cost of deployment and of managing servers and network connections, and many of the solutions were simple enough to deploy with or without a consultant. A typical per-user cost for a Lite version could be as low as $25, and for Professional and Enterprise versions in the range of $75-90. Tibco Spotfire and Tableau Software, for example, offer no-cost and low-cost tools that let users develop and share easy-to-understand data visualizations. BIRST, Adaptive Insights, and PivotLink are among a handful of on-demand BI systems that users can subscribe to online.
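A quick back-of-the-envelope check of those per-user figures, assuming monthly billing (typical for SaaS, though not stated here) and a hypothetical 10-user team:

```python
# Annual cost sketch for a hypothetical 10-user SMB team.
# Billing period (monthly) is an assumption; the text quotes only
# per-user prices of $25 (Lite) and $75-90 (Professional/Enterprise).
users = 10
lite_per_user = 25   # $/user, assumed monthly
pro_per_user = 90    # $/user, assumed monthly, top of the quoted range

lite_annual = users * lite_per_user * 12
pro_annual = users * pro_per_user * 12
print(f"Lite: ${lite_annual:,}/yr, Professional: ${pro_annual:,}/yr")
```

Even at the top of the quoted range, that is a fraction of the license, hardware and consulting cost of a traditional on-premises BI deployment.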

Don’t need a full-time IT professional to manage it – Any BI solution that requires a full-time resource would be unaffordable for a small or medium enterprise. The self-serve analytics included in most SaaS and cloud-based BI solutions make them the most attractive option.

Mobile-based workforce – With a major portion, or even just a part, of an SMB’s workforce being mobile, these businesses had little use for desktop- or laptop-based BI solutions; they looked for cloud-based mobile solutions that offered mobility at no extra cost.

The challenge for SMBs is acquiring BI software on slim technology budgets, then deploying and maintaining systems with limited IT support. Cloud- and SaaS-based BI solutions answer this call.

We would love to hear about your experience. Are you an SMB using a BI solution? Let us know how you are doing.

Your Big Data Is Worthless if You Don’t Bring It Into the Real World

In a generation, the relationship between the “tech genius” and society has been transformed: from shut-in to savior, from antisocial to society’s best hope. Many now seem convinced that the best way to make sense of our world is by sitting behind a screen analyzing the vast troves of information we call “big data.”

Just look at Google Flu Trends. When it was launched in 2008, many in Silicon Valley touted it as yet another sign that big data would soon make conventional analytics obsolete.

But they were wrong.

Not only did Google Flu Trends largely fail to provide an accurate picture of the spread of influenza, it will never live up to the dreams of the big-data evangelists. Because big data is nothing without “thick data,” the rich and contextualized information you gather only by getting up from the computer and venturing out into the real world. Computer nerds were once ridiculed for their social ineptitude and told to “get out more.” The truth is, if big data’s biggest believers actually want to understand the world they are helping to shape, they really need to do just that.

It Is Not About Fixing the Algorithm

The dream of Google Flu Trends was that by identifying the words people tend to search for during flu season, and then tracking when those same words peaked in real time, Google would be able to alert us to new flu pandemics much faster than the official CDC statistics, which generally lag by about two weeks.

For many, Google Flu Trends became the poster child for the power of big data. In their best-selling book Big Data: A Revolution That Will Transform How We Live, Work and Think, Viktor Mayer-Schönberger and Kenneth Cukier claimed that Google Flu Trends was “a more useful and timely indicator [of flu] than government statistics with their natural reporting lags.” Why even bother checking the actual statistics of people getting sick, when we know what correlates to sickness? “Causality,” they wrote, “won’t be discarded, but it is being knocked off its pedestal as the primary fountain of meaning.”

But, as an article in Science earlier this month made clear, Google Flu Trends has systematically overestimated the prevalence of flu every single week since August 2011.

And back in 2009, shortly after launch, it completely missed the swine flu pandemic. It turns out many of the words people search for during flu season have nothing to do with the flu, and everything to do with the time of year flu season usually falls: winter.

Now, it is easy to argue – as many have done – that the failure of Google Flu Trends simply speaks to the immaturity of big data. But that misses the point. Sure, tweaking the algorithms and improving data collection techniques will likely make the next generation of big data tools more effective. But the real big data hubris is not that we have too much confidence in a set of algorithms and methods that aren’t quite there yet. Rather, the issue is the blind belief that sitting behind a computer screen crunching numbers will ever be enough to understand the full extent of the world around us.

Why Big Data Needs Thick Data

Big data is really just a big collection of what people in the humanities would call thin data. Thin data is the sort of data you get when you look at the traces of our actions and behaviors. We travel this much every day; we search for that on the Internet; we sleep this many hours; we have so many connections; we listen to this type of music, and so forth. It’s the data gathered by the cookies in your browser, the FitBit on your wrist, or the GPS in your phone. These properties of human behavior are undoubtedly important, but they are not the whole story.

To really understand people, we must also understand the qualitative aspects of our experience — what anthropologists refer to as thick data. Thick data captures not just facts but the context of facts. Eighty-six percent of households in America drink more than six quarts of milk per week, for example, but why do they drink milk? And what is it like? A piece of fabric with stars and stripes in three colors is thin data. An American flag blowing proudly in the wind is thick data.

Rather than seeking to understand us simply based on what we do as in the case of big data, thick data seeks to understand us in terms of how we relate to the many different worlds we inhabit. Only by understanding our worlds can anyone really understand “the world” as a whole, which is precisely what companies like Google and Facebook say they want to do.

Knowing the World Through Ones and Zeroes

Consider for a moment the grandiosity of some of the claims being made in Silicon Valley right now. Google’s mission statement is famously to “organize the world’s information and make it universally accessible and useful.” Mark Zuckerberg recently told investors that, along with prioritizing increased connectivity across the globe and emphasizing a knowledge economy, Facebook was committed to a new vision called “understanding the world.” He described what this “understanding” would soon look like: “Every day, people post billions of pieces of content and connections into the graph [Facebook’s algorithmic search mechanism] and in doing this, they’re helping to build the clearest model of everything there is to know in the world.” Even smaller companies share in the pursuit of understanding. Last year, Jeremiah Robison, the VP of Software at Jawbone, explained that the goal with their fitness-tracking device Jawbone UP was “to understand the science of behavior change.”

These goals are as big as the data that is supposed to achieve them. And it is no wonder that businesses yearn for a better understanding of society. After all, information about customer behavior and culture at large is not only essential to staying relevant as a company, it is also increasingly a currency that, in the knowledge economy, can be traded for clicks, views, advertising dollars or, simply, power. If in the process businesses like Google and Facebook can contribute to growing our collective knowledge about ourselves, all the more power to them. The issue is that by claiming that computers will ever organize all our data, or provide us with a full understanding of the flu, or fitness, or social connections, or anything else for that matter, they radically reduce what data and understanding mean.

If the big data evangelists of Silicon Valley really want to “understand the world” they need to capture both its (big) quantities and its (thick) qualities. Unfortunately, gathering the latter requires that instead of just ‘seeing the world through Google Glass’ (or in the case of Facebook, Virtual Reality) they leave the computers behind and experience the world first hand. There are two key reasons why.

To Understand People, You Need to Understand Their Context

Thin data is most useful when you have a high degree of familiarity with an area, and thus have the ability to fill in the gaps and imagine why people might have behaved or reacted like they did — when you can imagine and reconstruct the context within which the observed behavior makes sense. Without knowing the context, it is impossible to infer any kind of causality and understand why people do what they do.

This is why, in scientific experiments, researchers go to great lengths to control the context of the laboratory environment – to create an artificial place where all influences can be accounted for. But the real world is not a lab. The only way to make sure you understand the context of an unfamiliar world is to be physically present yourself to observe, internalize, and interpret everything that is going on.

Most of ‘the World’ Is Background Knowledge We Are Not Aware of

If big data excels at measuring actions, it fails at understanding people’s background knowledge of everyday things. How do I know how much toothpaste to use on my toothbrush, or when to merge into a traffic lane, or that a wink means “this is funny” and not “I have something stuck in my eye”? These are the internalized skills, automatic behaviors, and implicit understandings that govern most of what we do. It is a background of knowledge that is invisible to ourselves as well as those around us unless they are actively looking. Yet it has tremendous impact on why individuals behave as they do. It explains how things are relevant and meaningful to us.

The human and social sciences contain a large array of methods for capturing and making sense of people, their context, and their background knowledge, and they all have one thing in common: they require that the researchers immerse themselves in the messy reality of real life.

No single tool is likely to provide a silver bullet to human understanding. Despite the many wonderful innovations developed in Silicon Valley, there are limits to what we should expect from any digital technology. The real lesson of Google Flu Trends is that it simply isn’t enough to ask how ‘big’ the data is: we also need to ask how ‘thick’ it is.

Sometimes, it is just better to be there in real life. Sometimes, we have to leave the computer behind.



Smart thinking by airlines and airports

100% of airlines and 90% of airports are investing in business intelligence solutions to provide intelligent information across their operations. This is according to Smart Thinking, released by SITA today at CAPA’s Airlines in Transition Summit in Dublin.

DUBLIN, Ireland – More than half of passengers would use their mobiles for flight status, baggage status and airport directions, and by 2016 the majority of airlines and airports will offer these services. In total, 100% of airlines and 90% of airports are investing in business intelligence solutions to provide the intelligent information across their operations that these and other services demand. This is according to Smart Thinking, released by SITA today at CAPA’s Airlines in Transition Summit in Dublin.

SITA, the IT and communications provider to the air transport community, regularly conducts global research on airports, airlines and passengers. This provides the unique opportunity to look across the entire industry and identify alignment, misalignment, and potential for acceleration. SITA’s Smart Thinking is based on this global research and incorporates additional input from leading airlines and airports including British Airways, Saudia, Dublin Airport Authority, London City Airport and Heathrow.

According to SITA’s paper, flight status updates are already a mainstream mobile service and will extend to the vast majority of airlines and airports by the end of 2016. By then, what today are niche services will also be well established. Bag status updates will be offered by 61% of airlines, and 79% of airports will provide status notifications such as queue times through security and walking time to the gate. More than three-quarters will also provide navigation/way-finding at the airport via mobile apps.

Nigel Pickford, Director, Market Insight, SITA, said: “Our research has clearly shown that the move to smartphone apps and mobile services is well underway. But many of the services that airlines and airports are planning are heavily dependent on their ability to provide more meaningful data and insight – providing passengers and staff the right information at the right time. Efforts are being made across the industry to collaborate and SITA has established the Business Intelligence Maturity Index to benchmark the progress.”

Pickford continued: “We asked airlines and airports to measure themselves in four categories of business intelligence best practice for this index: Data Access and Management; Infrastructure; Data Presentation; and Governance. Our analysis shows that on average the industry is only halfway to achieving best-in-class and further progress is needed.”

There are ongoing efforts across the industry to establish data standards and ensure system compatibility. Pickford added: “Though the picture is not perfect now, change is coming. All airlines and 90% of airports are planning to make business intelligence investments in the coming three years. Both face the issue, though, that while passengers are very keen to access information about their journey, they are also sensitive about privacy. The smart use of non-intrusive passenger information, however, will provide benefits to airlines and passengers.”

SITA’s report describes how today the focus is on building the foundation for business intelligence. Looking ahead, the combination of business intelligence and predictive analysis will help improve the passenger experience while optimizing the use of infrastructure and space at airports. In the past, airlines and airports had no choice but to react when “irregular” events such as bad weather disrupted their finely-tuned schedules. Using business intelligence, they will be more proactive, analyzing past events and combining live data feeds from multiple sources to predict future events and take preventative action before they occur. By making the transition from reactive to proactive to preventative, there are significant benefits to be gained for passengers and the industry alike.

Hadoop expands data infrastructure, boosts business intelligence

The big data that companies successfully transform into usable business intelligence (BI) is just the tip of a massive data iceberg, according to Jonathan Seidman, solutions architect at Cloudera.

The big data that companies successfully transform into usable business intelligence (BI) is just the tip of a massive data iceberg, according to Jonathan Seidman, solutions architect at Cloudera. At Big Data Techcon 2014, Seidman hosted a session called “Extending your data infrastructure with Hadoop,” in which he explained how Hadoop can help the enterprise tap into the potential business intelligence below the waterline. “That data that’s getting thrown away can have a lot of value, but it can be very difficult to fit that data into your data warehouse,” Seidman explained.

The problem with big data is that there’s so much of it; data centers simply don’t have the capacity to store it all. “Would you put a petabyte of data in your warehouse?” Seidman asked the audience. “It’s a good way to get fired,” a member shot back. For this reason, enterprises focus their energy on the data points that give a high return on byte, to use Seidman’s term: they capture and analyze the data that provides the most insight for the least amount of storage space. For example, a retailer would analyze the transactional dataset, focusing attention on actual purchases. But Seidman pointed out that valuable data gets left out: behavioral, non-transactional data, in the retail example. “What if you don’t just want to know what the customer bought, but what they did on the site?” Seidman asked.

Enter Apache Hadoop, an open source framework designed to store and process large datasets. Seidman described the technology as “scalable, fault tolerant and distributed.” With this framework, enterprises can load raw data first and impose a schema on it afterward, at read time. “This makes it easy for iterative, agile types of development,” Seidman said. He added that it makes a good sandbox for more exploratory types of analysis.
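The schema-on-read idea can be sketched in plain Python (an analogy only, not the Hadoop API; the records are invented): raw data is stored untouched, and a schema is imposed only when an analysis reads it.

```python
# Raw, schema-less storage: JSON lines kept exactly as they arrived.
import json

raw_store = [
    '{"user": "u1", "action": "view", "ts": 1}',
    '{"user": "u1", "action": "buy", "ts": 2, "amount": 19.99}',
    '{"user": "u2", "action": "view", "ts": 3}',
]

def read_with_schema(lines, fields):
    """Impose a schema at read time: keep only the requested fields,
    filling in None where a record lacks one."""
    for line in lines:
        rec = json.loads(line)
        yield {f: rec.get(f) for f in fields}

# A purchases-oriented schema over the same raw data, chosen by this
# particular analysis rather than fixed at load time.
purchases = [r for r in read_with_schema(raw_store, ["user", "amount"])
             if r["amount"] is not None]
print(purchases)
```

The point of the design: a later, different analysis (say, of browsing behavior) can read the very same raw store with a different field list, with no up-front ETL to a fixed warehouse schema.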


The Tools That Power Business Intelligence

Ever-evolving analytic software can greatly improve financial institutions’ decision-making.

Business intelligence technology has come a long way from the decision support systems of the 1960s. Today, it can do much more than just mine, analyze and report on data — it can cross-analyze different data sets, forecast future behavior and greatly improve decision-making.

Tools continue to expand their capabilities, providing more value every year. The types of analysis they can perform today stretch the realm of what was possible even five years ago.

Technology Advances

The financial industry analyzes its vast store of data in several ways, and evolving BI tools aid in those tasks. Some of the capabilities that executives seek include:

Content analytics: Unstructured data (such as the content found in machine logs, sensor data, audio, video, call center logs, RSS feeds, social media posts and PowerPoint files) is growing more rapidly than any other type of data. Content analytics applies BI to this unstructured data.

By understanding more about the content and how it’s being used, enterprises can determine whether it’s valuable to the business. The content that is deemed valuable can be linked to other data to extrapolate additional insight, such as understanding the cause behind trends and events.

Context analytics: Effective decisions can’t be made without understanding the context of data, and that’s where context analytics comes in. It focuses on surrounding each data point with a historical context about people, places and things, and how each data point relates to other data points.

Business analytics: While traditional BI platforms include executive dashboards that provide key performance metrics, newer tools go further. Business analytics provides a deeper level of statistical and quantitative analysis, allowing financial services organizations to dive deeper to discover trends, relationships, patterns, behaviors and opportunities that are particularly difficult to discern.

Predictive analytics: Predictive analytics is a must-have for many financial services organizations, and for good reason. The process uses a variety of techniques, including statistical analysis, regression analysis, correlation analysis and cluster analysis, along with text mining, data mining and social media analytics, to learn from historical experience what to expect in a given area. Financial services firms can use the resulting models and patterns along with real-time data to develop proactive actions in areas such as loan approval determination and product development.

Cognitive analytics: This type of analytics employs artificial intelligence and machine learning algorithms to learn and build knowledge by experience in their domain, including terminology, processes and preferred methods of interaction. They process natural language and unstructured data and can help experts make better decisions.

Text analytics: This process transforms unstructured data such as email, text messages, web pages, social media, survey responses and charts into text. With this information translated into text, BI systems can better use the data to discover patterns, relationships and root causes.

Social media analytics: From Twitter and Facebook to LinkedIn, YouTube and blogs, it’s clear that social media is an information channel that can’t be ignored. Social media analytics gathers and analyzes data from sites like these in near real time, giving decision-makers access to extremely valuable information that provides insight into customer sentiment.

It also provides a way for financial services companies to quantify market perceptions, track the success of marketing campaigns and product launches, discover insights and trends in customer preferences, and react more quickly.
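A toy illustration of sentiment scoring, with an invented word lexicon and made-up posts; production social media analytics uses far richer language models than word counting:

```python
# Minimal lexicon-based sentiment: score = positive hits - negative hits.
# Lexicon and posts are hypothetical.
POSITIVE = {"great", "love", "fast", "helpful"}
NEGATIVE = {"slow", "fees", "terrible", "outage"}

def sentiment(post):
    """Return a crude sentiment score for one post."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "Love the new mobile app, transfers are fast",
    "Another outage and more hidden fees, terrible",
]
scores = [sentiment(p) for p in posts]
print(scores)  # positive score for the first post, negative for the second
```

Aggregating such scores over thousands of posts per hour is what turns a noisy channel like Twitter into the near-real-time sentiment signal described above.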

Microsoft is set to release a patch for a “zero-day” vulnerability

By mid-April 2014, Microsoft is set to release a patch for a “zero-day” vulnerability. When I asked some of my friends whether they had heard the term “zero day,” a few said they had. When I asked what the term referred to, they thought it meant “the number of days until you’re hacked.” Close, but not quite. It actually refers to lead time in an arms race.

One way that hackers can compromise your computer is by exploiting bugs in the programs you use every day. These bugs are called vulnerabilities by security geeks. Software companies constantly test their products looking for these bugs, and when they find one, two things happen. First, the company starts working on a fix in the form of a software patch. Second, hackers start making malware and viruses to take advantage of the bug. In other words, once a vulnerability is found, an arms race starts. How much time does the company have to patch the hole? Can the company issue a fix before the hackers use the bug to attack you? Most companies don’t even reveal the weakness until they have the patch ready, but sometimes the sneaky bad guys find out.

Frequently, it is not the software company that finds the bug, but the hacker himself. Some hackers do nothing all day but look for vulnerabilities in popular software. When they find one, they can secretly start working right away on malicious code to take advantage of the bug. How can a software company create a patch if it doesn’t even know the vulnerability exists? In this case, how many days of lead time does a company have to create a fix? Zero! How much time does a user have to patch their system before being exposed to the malware? Zero!
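The lead-time framing can be sketched with dates (all made up for illustration):

```python
# Lead time = days between the vendor learning of the bug and working
# exploit code appearing in the wild. Zero (or less) means a zero-day.
from datetime import date

vendor_learned = date(2014, 4, 1)  # vendor discovers (or is told of) the bug
exploit_seen = date(2014, 4, 1)    # exploit already circulating that same day

lead_time = (exploit_seen - vendor_learned).days
label = "zero-day" if lead_time <= 0 else f"{lead_time}-day head start"
print(label)
```

When the hacker finds the bug first, the vendor's head start is zero days, hence the name.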

Sometimes it seems like there are too many things to consider when thinking about the security of your home computer(s), but a relatively easy way to greatly improve your odds is simply keeping the applications you use up-to-date. Many programs provide automated tools to do this. For those that don’t, tune in next week for another tip.

Merck Optimizes Manufacturing With Big Data Analytics

Pharmaceutical firm uses Hadoop to crunch huge amounts of data so it can develop vaccines faster. One of eight profiles of InformationWeek Elite 100 Business Innovation Award winners.

 Producing pharmaceuticals of any kind is an expensive, highly regulated endeavor, but producing vaccines is particularly challenging.

Vaccines often contain attenuated viruses, meaning the viruses are altered so they give you immunity but not the actual disease, and thus they have to be handled under precise conditions during every step of the manufacturing process. Components might have to be stored at exactly -8 degrees for a year or more, and with even a slight variance from regulator-approved manufacturing processes, the materials have to be discarded.

“It might take three parts to get one part, and what we drop or discard amounts to hundreds of millions of dollars in lost revenue,” says George Llado, VP of information technology at Merck & Co.

In the summer of 2012, Llado was seeing higher-than-usual discard rates on certain vaccines. Llado’s team was looking into the causes of the low vaccine yield rates, but the usual investigative approach involved time-consuming spreadsheet-based analyses of data collected throughout the manufacturing process. Sources include process-historian systems on the shop floor that tag and track each batch. Maintenance systems detail plant equipment service dates and calibration settings. Building-management systems capture air pressure, temperature, and other readings in multiple locations at each plant, sampling by the minute.

Aligning all this data from disparate systems and spotting abnormalities took months using the spreadsheet-based approach, and storage and memory limits meant researchers could only look at a batch or two at a time. Jerry Megaro, Merck’s director of manufacturing advanced analytics and innovation, was determined to find a better way.

By early 2013, a Merck team was experimenting with a massively scalable distributed relational database. But when Llado and Megaro learned that Merck Research Laboratories (MRL) could provide their team with cloud-based Hadoop compute, they decided to change course.

Built on a Hortonworks Hadoop distribution running on Amazon Web Services, MRL’s Merck Data Science Platform turned out to be a better fit for the analysis because Hadoop supports a schema-on-read approach. As a result, data from 16 disparate sources could be used in analysis without having to be transformed with time-consuming and expensive ETL processes to conform to a rigid, predefined relational database schema.

“We took all of our data on one vaccine, whether from the labs or the process historians or the environmental systems, and just dropped it into a data lake,” says Llado.

Megaro’s team was then able to come up with conclusive answers about production yield variance within just three months. In the first month, July 2013, the team loaded the data onto a partition of the cloud-based platform, and it used MapReduce, Hive, and advanced dynamic time-warping techniques to aggregate and align the data sets around common metadata dimensions such as batch IDs, plant equipment IDs, and time stamps.
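Dynamic time warping itself is a textbook alignment algorithm; the sketch below is a minimal generic implementation (not Merck's), showing how two series sampled at different rates can still be aligned at low cost:

```python
# Minimal dynamic time warping (DTW) distance between two numeric series.
# Illustrative only: real deployments run this at scale via MapReduce/Hive.

def dtw(a, b):
    """Return the DTW distance between sequences a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] repeats
                                 cost[i][j - 1],      # b[j-1] repeats
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m]

# The same (hypothetical) temperature ramp, logged at two sampling rates.
fast = [20, 21, 22, 23, 24]
slow = [20, 22, 24]
print(dtw(fast, slow))  # small despite the different lengths
print(dtw(fast, fast))  # identical series align at zero cost
```

This stretching-and-matching is what lets readings sampled by the minute line up against batch events recorded at irregular intervals, so that disparate systems can be compared on common time stamps.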

In the second month, analysts used R-based analytics to chart and cluster every batch of the vaccine ever made on a heat map. Spotting notable patterns, the team then used R to produce investigative histograms and scatter plots, and it drilled down with Hive to explore hypotheses about the factors tied to low-yield production runs. Using an Agile development approach, the team set up daily data-exploration goals, but it could change course by that afternoon if it failed to find solid data backing up a particular hypothesis. In the third month, the team developed models, testing against the trove of historical data to prove and disprove leading theories about yield factors.

Through 15 billion calculations and more than 5.5 million batch-to-batch comparisons, Merck discovered that certain characteristics in the fermentation phase of vaccine production were closely tied to yield in a final purification step. “That was pretty powerful, and we came up with a model that demonstrated, quantifiably, that specific fermentation performance traits are very important to yield,” says Megaro.
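At its simplest, the trait-to-yield relationship the model quantified is a correlation. A from-scratch Pearson correlation over invented per-batch numbers illustrates the idea; the trait values and yields below are fabricated for the example and say nothing about the actual strength of Merck's finding:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch, between
    a per-batch fermentation trait and final-purification yield."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: one fermentation trait per batch vs. final yield (%).
trait = [4.1, 5.0, 3.8, 5.5, 4.7]
yield_pct = [70.2, 84.0, 66.5, 91.3, 80.1]
r = pearson(trait, yield_pct)  # close to 1.0 here: strongly related
```

A production model would of course control for many traits at once and be validated against held-out batches, but a high correlation like this is the kind of signal that flags a fermentation trait as worth controlling.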

The good news is that these fermentation traits can be controlled, but Merck has to prove that in a test lab before IT can introduce any changes to its production environment. And if any process changes are deemed material, Merck will have to refile the vaccine’s manufacturing process with regulatory agencies.

With the case all but solved for one vaccine, Merck is applying the lessons learned to a variant of that product that is expected to be approved for sale as soon as this year. And drawing on both the manufacturing insights and the new big data analysis approach, Merck intends to optimize the production of other vaccines now in development. They’re all potentially lifesaving products, according to Merck, and it’s clear that the new data analysis approach marks a huge advance in ensuring efficient manufacturing and a more plentiful supply.

Magic Quadrant for Business Intelligence and Analytics Platforms, a Gartner report – 2014.

Market Definition/Description

The BI and analytics platform market is in the middle of an accelerated transformation from BI systems used primarily for measurement and reporting to those that also support analysis, prediction, forecasting and optimization. Because advanced analytics (descriptive, prescriptive and predictive modeling, forecasting, simulation and optimization; see "Extend Your Portfolio of Analytics Capabilities") are growing in importance in the BI and information management applications and infrastructure that companies are building, often with different buyers driving purchasing and different vendors offering solutions, this year Gartner has also published a Magic Quadrant exclusively on predictive and prescriptive analytics platforms (see Note 1). Vendors offering both sets of capabilities are featured in both Magic Quadrants.

The BI platform market is forecast to have grown into a $14.1 billion market in 2013, largely through companies investing in IT-led consolidation projects to standardize on IT-centric BI platforms for large-scale systems-of-record reporting (see "Forecast: Enterprise Software Markets, Worldwide, 2010-2017, 3Q13 Update"). These deployments have tended to be highly governed and centralized, with IT production reports pushed out to inform a broad array of information consumers and analysts. While analytical capabilities such as parameterized reports, online analytical processing (OLAP) and ad hoc query were deployed, they were never fully embraced by the majority of business users, managers and analysts, primarily because most considered them too difficult to use for many analytical use cases. As a result, and continuing a five-year trend, these installed platforms are routinely being complemented, and in 2013 were increasingly displaced, by new investments in new sales situations, with requirements skewed more toward business-user-driven data discovery techniques that make analytics beyond traditional reporting accessible and pervasive to a broader range of users and use cases.

Also in support of wider adoption, companies and independent software vendors are increasingly embedding traditional reporting, dashboards and interactive analysis, as well as more advanced and prescriptive analytics built from statistical functions and algorithms available within the BI platform, into business processes or applications. The intent is to expand the use of analytics to a broad range of consumers and nontraditional BI users, increasingly on mobile devices. Moreover, companies are increasingly building analytics applications that leverage new data types and new types of analysis, such as location intelligence and analytics on multistructured data stored in NoSQL data repositories.