At DOUG’s October database forum, Carlos Carballo, principal sales consultant for Oracle, presented highlights from Oracle Open World 2013. During his hour-long talk, he focused on four topics:
- In-Memory Option
- In-Memory Column Store
- Oracle M6 Big Memory Machine
- Backup Logging Recovery Appliance
12c’s In-Memory Option
Carlos started with a preview of Oracle Database 12c’s In-Memory Option. It promises real-time analytics, with queries up to a hundred times faster, whether you query an OLTP database or a data warehouse, along with transaction processing that is twice as fast. An Oracle rep in the audience threw out an estimated delivery date of 1Q2014 (on Linux, at least).
Carlos talked about the pros and cons of row format vs. column format. Since OLTP is designed around rows, transactions such as inserting or querying a sales order run faster on row format, while analytics such as a report on sales totals by state run faster on column format.
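A quick sketch makes the contrast concrete (the sales table and column names here are hypothetical, not from the talk):

```sql
-- Hypothetical sales table for illustration
CREATE TABLE sales (
  order_id NUMBER PRIMARY KEY,
  state    VARCHAR2(2),
  amount   NUMBER
);

-- OLTP work touches whole rows, so row format is the natural fit
INSERT INTO sales (order_id, state, amount) VALUES (1001, 'TX', 250);
SELECT * FROM sales WHERE order_id = 1001;

-- Analytics scan a column or two across every row,
-- so column format is the natural fit
SELECT state, SUM(amount) AS total
FROM sales
GROUP BY state;
```

The first two statements read and write entire rows; the last touches only two columns but every row, which is exactly the access pattern a columnar layout accelerates.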
With in-memory, you won’t have to choose. While the format on disk is row based, both row and column formats exist for the same table in memory. Both formats are simultaneously active and transactionally consistent, and analytics and reporting use the new column format. It differs from TimesTen or Flash Cache in that it is not a separate product. Carlos reiterated that there are no changes to existing features: everything stays the same on disk, with just increased memory and more processes.
The In-Memory Option uses in-memory columnar technology. This is a pure in-memory format with no logging, ensuring near-zero overhead on changes, even for OLTP. Memory-optimized compression yields a 2x to 10x memory reduction, and vector processing (SIMD) delivers subsecond response times. Each CPU scans local in-memory columns, which translates to a scan rate of billions of rows per second per CPU core, and a query can be broken down into 16 parallel queries. It also supports table joins up to 10x faster.
In-Memory Column Store
Traditional OLTP is slowed down by analytic indexes. The column store replaces analytic indexes, which removes both the overhead on changes and the space lost to indexes. As a result, OLTP and batch run twice as fast, with less tuning and administrative overhead. An Open World demo compared traditional in-memory processing against the new 12c In-Memory Option: while traditional in-memory processed 25 million rows per second, the new option processed 20,110 million rows per second, roughly 800 times faster.
This led into a discussion of the capacity and cost effectiveness of various storage tiers, from the coldest to the hottest data:
- Disk -> coldest data
- PCI Flash -> active data
- DRAM -> hottest data
Each tier has specialized algorithms and compression that play to the capacity of disk, the IOs of flash, and the speed of DRAM. The heat map comes with 12c and will be part of the 12c Enterprise Manager; it will not be an 11g feature. You can, however, use AWR reports to develop policies for how data is moved across tiers, and if you are using interval partitioning, that can be taken into account as well.
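As a sketch of what such tiering policies look like in the 12c feature set that shipped (Automatic Data Optimization; the table and tablespace names here are hypothetical):

```sql
-- Heat map tracking must be enabled instance-wide (12c)
ALTER SYSTEM SET HEAT_MAP = ON;

-- Compress segments that have cooled off
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED
  SEGMENT AFTER 30 DAYS OF NO MODIFICATION;

-- Move cold segments to a cheaper tablespace (hypothetical name)
ALTER TABLE sales ILM ADD POLICY
  TIER TO low_cost_tbs
  SEGMENT AFTER 90 DAYS OF NO ACCESS;
```

The heat map tracks segment- and block-level access, and the ILM policies fire against that tracked data, which is the automated version of the AWR-driven approach described above.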
Scaling out the in-memory option is supposed to be easy. It is RAC aware, and in-memory queries can be parallelized across servers with direct-to-wire InfiniBand. Using INMEMORY simply requires:
Inmemory area = XXXX GB
Alter table … inmemory
The best part is that the use of Oracle in-memory is transparent to applications.
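In the release that eventually shipped (12.1.0.2, after this talk was given), those two steps look like the following; the size and table names are placeholders:

```sql
-- In the parameter file: carve an in-memory area out of the SGA
-- (takes effect after an instance restart)
ALTER SYSTEM SET INMEMORY_SIZE = 100G SCOPE = SPFILE;

-- Then mark tables (or partitions) for population in the column store
ALTER TABLE sales INMEMORY;
ALTER TABLE orders INMEMORY PRIORITY HIGH;  -- populate eagerly at startup
```

No query or application changes are needed beyond this; the optimizer routes analytic scans to the column store automatically.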
Oracle M6 Big Memory Machine
Back in March the M5 was announced; however, it has already been replaced by the new Oracle M6. The Oracle SuperCluster M6-32 is a general-purpose box that runs Exadata storage. It can use Oracle VMs or zones and can be configured with Solaris 10 or Exalogic. Befitting its name, it comes with 32 TB of memory and 32 processors on a 64-bit architecture. While the storage servers run Linux, the other database elements run on Solaris.
M6 = Exadata + Exalogic + Virtualization
With the M6, there is zero overhead virtualization. The M6 supports both Solaris 10 and Solaris 11 in electronically isolated dynamic domains. Multiple VMs and multiple domains are possible on the M6 with integrated, hardware-assisted encryption.
Backup Logging Recovery Appliance
Database delta-push pushes only the changed data, with real-time redo shipping from in-memory redo buffers. It can scale to thousands of clients and petabytes of data. The delta store is a validated, compressed database that understands RMAN and can be configured as a remote replica. Plus, with delta-push, data loss exposure is reduced to fractions of a second.