Monthly Archives: March 2014

Oil Companies Use Spotfire to Drill Down to Real Results

Planning the course of your business takes more than talent and intuition; strategic decisions demand solid data. These days, titans of industry rely on that data to keep the fires stoked and the wheels turning. Calculated decisions are the real means to success, and the faster you make them, the better. But with today's massive information volumes and unrelenting speed, organizations need a modern way of extracting valuable insight from their data. That's where TIBCO Spotfire comes in.
Fast Insights Drive Results

Forest Oil, a premier drilling and exploration company, understands this all too well. In the oil and gas industry, where speed and foresight can mean the difference between boom and bust, keeping costs down and production high is critical. Now, with Spotfire, Forest Oil can monitor and analyze every nugget of data originating from its employees and wells, all the way down to the predicted vs. actual productivity of specific oil fields, all at high speed.

Remote Collaboration

Spotfire also delivers innovative mobile capabilities to keep remote employees connected. Let's face it: oil and natural gas deposits are rarely found near the office water cooler at corporate headquarters. Fast access to the data you need in the palm of your hand, wherever you may be, is essential to gaining and sustaining competitive advantage. Employees are alerted to issues in real time; when a well's forecasted and actual production differ by 10 percent, the right employees receive an alert, permitting them to act before it's too late.
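To make that rule concrete, here is a minimal sketch of such a deviation alert in Scala. It is a generic illustration, not Spotfire's own scripting API, and the WellReading type, field names, and sample numbers are hypothetical:

    // Hypothetical data shape for a well's forecasted vs. actual output.
    case class WellReading(wellId: String, forecastBbl: Double, actualBbl: Double)

    // Flag a reading when actual output strays more than 10% from forecast.
    def needsAlert(r: WellReading, threshold: Double = 0.10): Boolean =
      math.abs(r.actualBbl - r.forecastBbl) / r.forecastBbl > threshold

    val readings = Seq(
      WellReading("W-101", forecastBbl = 1200.0, actualBbl = 1250.0), // within 10%
      WellReading("W-102", forecastBbl = 900.0,  actualBbl = 760.0)   // ~15.6% off
    )

    readings.filter(needsAlert(_)).foreach { r =>
      val pct = 100 * (r.actualBbl - r.forecastBbl) / r.forecastBbl
      println(f"ALERT: well ${r.wellId} is $pct%.1f%% off forecast") // notify the right people
    }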

Simplicity is Key

Did I mention Spotfire's stunning analytic visualizations? They're included too, and they make it simple to develop insights quickly by presenting data in a logical, intuitive manner. Contrary to popular belief, you don't have to be the reincarnation of Einstein to use cutting-edge BI analytic tools effectively, though the tangible results may suggest otherwise. By connecting personnel to the information they need to do their jobs successfully, analytics supplies not only real-time speed but also the means to make smarter decisions on the spot, wherever that might be.

Browse Info Solutions will play a strategic role

Browse Info Solutions will play a strategic role in integrating with human resource groups to conduct pre-employment background screening on individuals to whom employment is to be tendered. We ensure that new hires are of the highest integrity in order to maintain a safe work environment.

http://northforkvue.com/press-releases/93581/browse-info-solutions-announces-launch-of-integrated-resource-management-application-presto-hr/

Data Virtualization for Business Intelligence and Data Solutions

In a nutshell, data virtualization provides an abstraction layer that hides from applications most of the technical details of how and where data is stored. Applications do not need to know where the data physically lives, how it should be integrated, where the data store server runs, what the required APIs are, or which language to use to access the data.
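A minimal Scala sketch of that abstraction idea follows; the DataSource trait, the two sources, and the Customer type are hypothetical stand-ins, not any real DV product's API:

    // The logical entity an application asks for.
    case class Customer(id: Int, name: String)

    // One interface, many possible physical homes for the data.
    trait DataSource { def fetchCustomers(): Seq[Customer] }

    class WarehouseSource extends DataSource {   // e.g. a relational warehouse
      def fetchCustomers() = Seq(Customer(1, "Acme"))
    }
    class RestSource extends DataSource {        // e.g. a SaaS REST endpoint
      def fetchCustomers() = Seq(Customer(2, "Globex"))
    }

    // The virtualization layer: callers see one logical view and never learn
    // which store, server, or API produced each row.
    class VirtualCustomerView(sources: Seq[DataSource]) {
      def customers(): Seq[Customer] = sources.flatMap(_.fetchCustomers())
    }

    val view = new VirtualCustomerView(Seq(new WarehouseSource, new RestSource))
    view.customers().foreach(println)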

Advantages –

- Users can work with more timely data

- Less need for creating derived data stores

- Shorter time to market for new reports

Disadvantages –

- Transformations are executed repeatedly, on every query

- Complex transformations can take too long

- The production system overwrites old data when new data is entered, so historical data is not preserved

Before implementing, identify a good test project: a single data source with anywhere from a handful of rows to millions of rows, a handful to 100 columns, and a low volume of concurrent users.

In the traditional process, we use ETL to move data into an application-specific database and then use that data in the application or to build reports. In some cases, by the time you have moved the data, the report requirements have changed. A DV layer instead allows an application to access shared enterprise data services without physically replicating data into its own application schema: the data stays in the source system, and any application can use it without copying it over. If we go beyond structured, internal data, you can use DV to connect to unstructured data (Facebook, Twitter) and external data (third-party-owned data) without hosting them in your own infrastructure. That said, DV is a complementary tool that you would want to have, not a replacement for what you already have in your technology stack.
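A toy Scala contrast of the two styles (the well ID and numbers are made up): the ETL copy is frozen at batch time, while the DV-style query always answers from the source.

    // The live production system.
    var sourceOfTruth = Map("W-101" -> 1250.0)

    // ETL style: a snapshot copied into an app-specific store at batch time.
    val etlCopy = sourceOfTruth
    sourceOfTruth = sourceOfTruth.updated("W-101", 1310.0) // the source moves on

    // DV style: each request goes back to the source system.
    def dvQuery(wellId: String): Option[Double] = sourceOfTruth.get(wellId)

    println(etlCopy.get("W-101")) // Some(1250.0): the stale snapshot
    println(dvQuery("W-101"))     // Some(1310.0): the current source value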

Data will drive the next wave of widespread Functional Programming adoption

Big data is a popular term, associated with the problem of data sets too big to manage with traditional databases. Running in parallel has been the NoSQL era, with stores that are good at handling unstructured data, scaling out, and so on. IT shops have realized that NoSQL is useful, but people are really interested in SQL, and it is making a comeback. You can see it in Hadoop, in the SQL-like APIs for some "NoSQL" DBs (e.g., Cassandra's CQL and MongoDB's JavaScript-based query language), as well as in the NewSQL DBs.

A drawback of SQL is that it doesn't provide first-class functions, so (depending on the system) you are limited to the functions that are built in, or to UDFs (user-defined functions) that you can write and register. Functional programming languages make this easy.
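For instance, in an FP language an ad-hoc transformation is just a value you define inline and pass along, with no registration step. A minimal Scala sketch (the tax-rate example is hypothetical):

    val prices = Seq(10.0, 25.0, 40.0)

    // An ad-hoc "UDF": a first-class function defined on the spot.
    val withTax: Double => Double = p => p * 1.08

    // Passed directly into the query-like pipeline.
    val total = prices.map(withTax).sum
    println(total) // 81.0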

Even today, most developers get by without understanding concurrency; many will just use an actor or reactive model to solve their problems. I think more developers will have to learn how to work with data at scale, and that fact will drive them to FP.

We have seen a lot of issues with MapReduce. Alternatives are already gaining traction: Spark for general use and Storm for event stream processing. FP is such a natural fit for these problems that any attempt to build big data systems without it will be handicapped and will probably fail.
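As a small illustration of that fit, the classic word count in Spark's Scala API is just a chain of first-class functions; the file paths below are placeholders.

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCount {
      def main(args: Array[String]): Unit = {
        // local[*] is for illustration; on a cluster the master comes from deployment.
        val sc = new SparkContext(
          new SparkConf().setAppName("wordcount").setMaster("local[*]"))
        sc.textFile("hdfs://input/path")        // placeholder input path
          .flatMap(_.split("\\s+"))             // split lines into words
          .map(word => (word, 1))               // pair each word with a count
          .reduceByKey(_ + _)                   // sum the counts in parallel
          .saveAsTextFile("hdfs://output/path") // placeholder output path
        sc.stop()
      }
    }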