Big data is a popular buzzword, but it names a real problem: data sets too big to manage with traditional databases. In parallel came the NoSQL era, with databases good at handling unstructured data, scaling out, and so on. IT shops have realized that NoSQL is useful, but people are still really interested in SQL, and it is making a comeback. You can see it in Hadoop, in SQL-like APIs for some "NoSQL" databases (e.g., Cassandra's CQL), in MongoDB's JavaScript-based query language, and in the NewSQL databases.
A drawback of SQL is that it doesn't provide first-class functions, so (depending on the system) you are limited to the built-in functions or to user-defined functions (UDFs) that you can write and register. Functional programming languages make this easy.
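To illustrate the contrast, here is a minimal sketch in Python (purely for illustration, not tied to any particular database): when query operators accept functions as values, callers can pass in arbitrary logic with no registration step, whereas SQL restricts you to built-ins or pre-registered UDFs.

```python
# Illustrative sketch: "query" operators that take functions as values,
# the FP analogue of SELECT expressions and WHERE clauses.

def select(rows, transform):
    """Apply an arbitrary function to each row."""
    return [transform(row) for row in rows]

def where(rows, predicate):
    """Keep rows for which an arbitrary predicate holds."""
    return [row for row in rows if predicate(row)]

# Hypothetical sample data, standing in for a table of (name, salary) rows.
employees = [("Ada", 120000), ("Grace", 130000), ("Linus", 90000)]

# Any function can be passed in -- no UDF registration needed.
high_earners = where(employees, lambda row: row[1] > 100000)
names = select(high_earners, lambda row: row[0])
```

Because `predicate` and `transform` are ordinary values, they can be composed, stored, and reused like any other data, which is exactly the flexibility SQL's fixed function set lacks.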
Even today, most developers get by without understanding concurrency; many just use an actor or reactive model to solve their problems. I think more developers will have to learn how to work with data at scale, and that need will drive them to FP.
We have seen a lot of issues with MapReduce. Alternatives are already gaining traction, such as Spark for general-purpose processing and Storm for event-stream processing. FP is such a natural fit for these problems that any attempt to build big data systems without it will be handicapped and will probably fail.
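A rough sketch of why FP is such a natural fit: the core operations in MapReduce and in Spark's API are just higher-order functions. The following plain-Python word count (illustrative only, not actual Spark code) mimics the flatMap / map / reduceByKey pipeline that Spark popularized.

```python
from collections import defaultdict
from functools import reduce

# Word count in the functional style of Spark's RDD API,
# expressed with plain-Python helpers.

def flat_map(f, xs):
    """Apply f to each element and flatten the results (flatMap)."""
    return [y for x in xs for y in f(x)]

def reduce_by_key(f, pairs):
    """Group (key, value) pairs by key, then reduce each group (reduceByKey)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce(f, values) for key, values in groups.items()}

lines = ["big data big ideas", "data at scale"]
words = flat_map(lambda line: line.split(), lines)    # flatMap
pairs = [(w, 1) for w in words]                       # map
counts = reduce_by_key(lambda a, b: a + b, pairs)     # reduceByKey
```

The whole job is three function compositions over immutable inputs, which is why distributing it across a cluster (what Spark actually does) requires no change to the programming model.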