Map-Reduce and Hadoop are distributed processing frameworks, not database systems. Complex analytics and persistent data management go well beyond the capabilities of these frameworks.
Building an equivalent computational database from the components of the Hadoop ecosystem would require extensive programming. Matching the functionality delivered by SciDB would mean manually integrating the Hadoop file system HDFS, the Hadoop database HBase, Pig for distributed dataflow scripting, Hive for data warehousing, and a standalone analytics package such as R or SAS.
Another way to look at it: SciDB supports ad-hoc analytics by letting you focus on the data analysis itself. With SciDB, you can fire off a query the moment it occurs to you. By contrast, Hadoop imposes a heavy burden of infrastructure setup, data preparation, mapping of this, reducing of that, and tedious architectural coding. Nothing that requires such effort can rightly be called ad-hoc.
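To illustrate the kind of ad-hoc query meant here, the following is a minimal sketch in SciDB's SQL-like query language AQL. The array name `measurements`, its attribute `temp`, and its dimensions are hypothetical, invented for this example:

```
-- Assumed array: measurements<temp:double>[lat, lon]
-- A one-line ad-hoc aggregate, run directly against the stored array:
SELECT avg(temp) FROM measurements WHERE lat > 45;
```

No data loading pipeline, mapper, or reducer needs to be written first; the query runs against data already resident in the database.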
Hadoop is merely a framework for programmers, whereas SciDB is a finished product for data scientists, analysts and researchers. SciDB merges all the functions needed for flexible, complex analytics into an all-in-one, seamlessly integrated package.
SciDB also delivers valuable additional functionality, including N-dimensional partitioning, implicit indexing on array dimensions, in-place updates, and data versioning.
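Two of those features, N-dimensional partitioning and implicit indexing, are visible directly in how an array is declared. The sketch below uses AQL; the array name, attribute, and chunk sizes are hypothetical choices for illustration:

```
-- A 2-D array partitioned ("chunked") along both dimensions.
-- Each chunk holds a 1000 x 1000 tile, distributed across the cluster:
CREATE ARRAY sensor_grid <reading:double>
  [x=0:9999,1000,0, y=0:9999,1000,0];

-- The dimensions x and y act as an implicit index: lookups by
-- coordinate need no separately maintained index structure.
```

Because the dimensions themselves index the data, range and coordinate queries are resolved without the secondary index maintenance a system like HBase would require.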