Wednesday, 3 November 2010

The exciting thing about the cloud - for application architects - is the jump in scale: some applications now serve millions of online users in fractions of a second with information refined from huge amounts of data.

This category of application has big data, heavy processing and high transaction rates. Beyond a certain scale, the cheapest and best solution is an in-memory architecture (see RAMClouds), which explains why every industry player is betting on some sort of “in-memory database/data grid” product - the industry trends point inexorably to this solution for high-value web sites.

What's exciting is that the scale, reach and low cost of this new generation of platforms will make a whole new raft of applications commercially and technically viable.

The Problem

However, the problem with cloud architectures for mission-critical applications revolves around ACID transactions - specifically, the lack of them. Brilliant engineers have tried to use existing techniques such as distributed transactions (e.g., XA) to provide scalable applications with ACID support. This effort has failed because distributed transactions are too slow and unreliable.

The CloudTran Approach

The CloudTran solution is to unbundle transaction management from the data stores (i.e., databases, document stores).

This echoes the approach of Unbundling Transaction Services in the Cloud - which proposed an approach to scaling cloud databases, by unbundling the transaction management from the data storage functions.

The first change resulting from our unbundling is a central transaction coordinator that sits between the in-memory data grid and the data stores. The coordinator can handle changes from any number of nodes in the grid and can send data to any number of stores. So the new layering is:

  • client (e.g. servlet, REST)
  • in-memory data + (sharded) processing (also interacting with messaging)
  • Transaction Coordinator
  • data stores - databases, Hadoop etc.
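To make the layering concrete, here is a minimal sketch in Java. None of these types are the real CloudTran API - they are hypothetical, and only illustrate where a central coordinator sits: gathering changes from any number of grid nodes and pushing them to any number of stores.

```java
import java.util.List;

// Hypothetical types only - not the CloudTran API. They sketch the layering above:
// grid nodes report changes to a central coordinator, which sends them to the stores.

/** A change made on an in-memory grid node within a transaction. */
class GridChange {
    final String entityType;
    final Object key;
    final Object newValue;
    GridChange(String entityType, Object key, Object newValue) {
        this.entityType = entityType;
        this.key = key;
        this.newValue = newValue;
    }
}

/** Any durable store behind the grid: a database, a document store, Hadoop, etc. */
interface DataStore {
    void write(List<GridChange> changes);
}

/** Sits between the data grid and the data stores. */
interface TransactionCoordinator {
    String begin();                                    // start a grid-wide transaction
    void enlist(String txId, GridChange change);       // any number of grid nodes report changes
    void commit(String txId, List<DataStore> stores);  // push the changes to any number of stores
}
```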

The second change made by CloudTran is to distribute transaction management - particularly constraint and isolation handling - to achieve maximum performance. The client, the “ORM” (object relational mapper), the in-memory nodes and the transaction coordinator all handle different aspects of transaction management. This allows even an entry-level configuration to handle thousands of update transactions per second.

The path to the data stores is supremely important for durability, of course, and, secondarily, for links into data warehousing and other BI systems that feed off the database. With CloudTran, however, the performance requirements on the data stores change, which opens up opportunities:

  • data affinity and foreign-key constraints are handled in the grid, so tables can, for example, each be sent to a single physical database, which may avoid complicated sharding
  • latency to the data store is no longer critical, which means the application front-end can be in the cloud and the durable data in the data center, reducing security concerns

Being Upfront

Successful companies in the future will need faster web sites that calculate more refined intelligence from deeper analysis of personal preferences and social trends. To achieve this, live data will need to move to the front - alongside the processing of services and events - rather than being stored in a separate database tier.

The upside of the move is increased competitiveness and the ability to serve a global customer base. The challenge is the uncertainty as new tools are adopted, and risks in strategy and execution. CloudTran's strong, scalable transactionality linking to standard databases gives architects and developers a familiar reference point.

Sunday, 21 February 2010

The Transportation Business

A lot of the action we get involved in with CloudTran relates to finance, investment banking and so on. But we met a media company last week whose setup seems just as suitable for CloudTran and GigaSpaces.

The company collects data about consumer responses to promotions every day - about 10 million data points - then slices and dices this into usable information for the consumer product manufacturers. They're a UK company, with a stable business based on applications in Java and coping quite happily at the moment ... but now having a little think about business and technical strategy in the longer term. This brought a few things to mind.

Globalisation and Consolidation

Consumer information collection and processing? Sounds like a business that could go global - if I were Coca-Cola, I'd want to get analytics across countries, regions or continents from a single supplier. If the business can possibly go global, the management has to decide whether they want to play in that space. (And if they don't, they may soon find someone else occupying that high ground.)

So ... what would it take to go global? Globalisation in this area is likely to lead to consolidation of worldwide processing through a single system in order to save operational costs. This means there could be more processing runs, many of them upwards of 100 million data points.

The 2-hour day

The crunch point for this company is a small processing window - in their case, the 2 hours each night when all the raw data is in and has to be turned into saleable information.

So the maximum power of the system is used for 2 hours per day - 8% usage. That matches typical CPU utilisation averages of around 5-10% in the worldwide server base.

For large enterprises, the answer to making their estate more efficient is virtualisation, which can move system utilisation up into the 40-50% range. For SMEs, the answer is the cloud. Buy two hours of a few machines - maybe $50 per night - for heavy processing power, then release them. Compared to the value of the information being provided, the cost is surely trivial. And the "-as-a-service" model means that management, provisioning, backups and all that stuff is done by the provider, simplifying the SME's own IT function.

Red-hot Oracle

Given that the company's Oracle database machine is already red-hot during peak processing, positioning for growth means looking at the data processing structure. The current process consists of a number of select+calculate+save cycles - reading then writing to the database disk.

In thinking about this, I was struck by how long the 'back-end database' architecture has survived, and how the disk vs. electronics equation has changed.

My company evaluated databases back in 1982-83 and ended up buying Oracle. At that point, the hot CPU was the Motorola 68000 at 1 Mips, typically with 256KB of memory. The hot disk was the Shugart/Seagate ST-412 - 10MB capacity, a transfer rate of 5Mbps (million bits/sec) and an 85ms average access time (according to the Seagate site ... or 15-30ms in Wikipedia!).

Nowadays, for testing we have a number of fairly hot Intel i7 920s (4 cores) - which I reckon are at least 5,000X faster than the 68000 - with 8GB of memory (about 32,000X). The hard disk is 250GB (25,000X), with a transfer rate of 150MB/s - million bytes/sec - or 1.2Gbps (240X), and an access time of 13ms (about 6.5X).

                 Then        Now          Multiplier (X)
  CPU            1 Mips      5,000 Mips   5,000
  Memory         256KB       8GB          32,000
  Disk capacity  10MB        250GB        25,000
  Transfer rate  5Mbps       1.2Gbps      240
  Access time    85ms        13ms         6.5

So - disk capacity has kept up with CPUs and memory, but disk transfer rates are well down by comparison ... and access times have fallen off the pace dramatically. Furthermore, the discrepancy is going to keep getting worse: over the next 10 years, disk seek times and IO rates won't improve greatly, whereas CPU and memory will improve by 10-100 times. There just aren't the advances in mechanical devices that there are in electronics.

For an analytics application like this, paying for expensive licenses to pump information in and out of the database through a bunged-up pipe - work that is irrelevant to the business - is just a waste. It is much less effective than loading the incoming data and doing the analysis in-memory.

Easy Scalability

What I was trying (!) to say in the previous section was that the company's database-centric architecture is almost certain to hit the wall when the volumes go up by 10-100X - this sort of change in quantity usually triggers a qualitative change. Especially if the DB machine is already smoking.

The combination of GigaSpaces' scalable data partitioning with CloudTran's automatic distribution and collection of data across machines means that more nodes, more memory and more processors can easily be deployed on the application.

As CloudTran allows you to easily create a number of deployments and keep them in step with the app, it might be worthwhile to have different size deployments on hand, and spin up large and small versions for processing depending on the size of data-set to be processed.

Data Grid = Compute Grid

A common approach to using grids is to have separate data and compute grids.

However, this company's application can benefit hugely from putting related data on a single machine; when you do this across the whole data set, the separate grids become a single data+compute grid. This means that the data needed to perform a distinct step in the calculation is mostly, if not completely, in the memory of the CPU running that calculation.

Our rule of thumb is that if the cost of getting a piece of data from an in-memory transactional store is 1 unit, then across the network the cost is 50-100 units, and from a database 50,000-100,000 units.
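As a back-of-the-envelope illustration of those ratios (the unit costs below are just the mid-points of the rule-of-thumb figures above, not measurements):

```java
/** Rough illustration of the data-access ratios above - not a benchmark. */
public class AccessCostSketch {
    static final long IN_MEMORY_LOCAL = 1;        // co-located in the same node
    static final long ACROSS_NETWORK  = 75;       // mid-point of 50-100
    static final long FROM_DATABASE   = 75_000;   // mid-point of 50,000-100,000

    public static void main(String[] args) {
        long reads = 1_000_000;   // data items touched by one calculation step
        System.out.println("co-located : " + reads * IN_MEMORY_LOCAL + " units");
        System.out.println("networked  : " + reads * ACROSS_NETWORK  + " units");
        System.out.println("database   : " + reads * FROM_DATABASE   + " units");
    }
}
```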

By spreading the data around, the data+compute grid also can apply many processors to the data. Multiply the above numbers by 10 or 100 nodes working on the problem and soon you're talking about real money.

Organising this with CloudTran takes no effort - the co-location of the data, and scatter-gather of information from other nodes, is all done automatically. The calculations are split up, with each node working on its sub-process, using its local data.

Overlaps

We mentioned that the typical processing cycle is select+calculate+save. In an in-memory architecture, the select will be done in-memory, as of course will the calculation. This leaves the save-to-database, or "commit", step.

If the information being committed is simply intermediate calculations, then in an in-memory architecture you can skip that step entirely. However, there will be cases in long-running calculations where you won't have the time to start over if anything goes wrong: in that case, you really do want to commit this information to a database.

CloudTran makes it possible to commit results to a transaction buffer machine very quickly (i.e. a few milliseconds for small transactions) and have the processing cycle move on to the next phase while the previous one commits. In other words, we overlap the commit of one stage with the in-memory processing of the next. If the slowest part is the information analysis, using CloudTran to commit in parallel means that committing costs nothing (in the critical-path sense).
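Here is a minimal sketch of the overlapping idea, in plain java.util.concurrent rather than the CloudTran API (the CommitBuffer type and the stage structure are hypothetical, just to show the pipelining):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch: overlap the commit of stage N with the in-memory processing of stage N+1. */
public class OverlappedCommit {

    /** Stand-in for the transaction buffer - not the real CloudTran API. */
    interface CommitBuffer {
        void commit(Object stageResult);
    }

    private final ExecutorService committer = Executors.newSingleThreadExecutor();

    public void run(CommitBuffer buffer, List<Callable<Object>> stages) throws Exception {
        Future<?> previousCommit = null;
        for (Callable<Object> stage : stages) {
            Object result = stage.call();              // in-memory work for this stage runs...
            if (previousCommit != null) {
                previousCommit.get();                  // ...while the previous commit lands
            }
            previousCommit = committer.submit(() -> buffer.commit(result));
        }
        if (previousCommit != null) {
            previousCommit.get();                      // drain the final commit
        }
        committer.shutdown();
    }
}
```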

And What About Flash?

Now that I've moaned about hard disks, does flash ruin the whole argument? Well, for analytics applications, the answer is probably not. If in any given phase of processing (select or commit) your database machine is smoking today, then adding flash drives won't help at all because the bottleneck is the CPU. You won't be able to create a scalable solution with a "database tier" architecture.

What is the database tier doing with its CPU cycles? The database does some "real" processing steps, such as merging indexes and sorting for SELECT. Then there are the overhead steps. On the storage side, this means packing rows into storage pages on disk (e.g. Oracle blocks), constructing indexes and mapping them to storage. On the comms side, the overhead steps are interfacing through JDBC, serialising the response and then the TCP/IP processing.

You may be able to improve the "real" processing steps - but to scale up by 10-100X, you'll almost certainly hit the "overhead" wall. This is why we prefer to build on an integrated data+compute tier.

Stephen Foskett has a complementary analysis of flash and cloud storage issues.

IT - Data and Processing

A while back, I visited a large IT company's headquarters with my boss. As we walked towards reception, he looked up at the huge building and said "Guess how many people work here?". I forget my answer - his was "About a third of them".

Ever since I started in enterprise IT, I have been struck by how many IT components don't do real work either. So much shipping of data around goes on. If you're looking for a customer's personal information and orders, the real data might be 2KB - but you'll probably end up shifting many megabytes around various components to get to it. And the real processing is usually trivial - a few hundred instructions.

As Peter Drucker would have said, right now we're in the transportation business - we should be in the information business.

Saturday, 6 February 2010

CloudTran Reading list

I just got asked for some background reading on CloudTran. I've been meaning to give my introductory reading list for some time, so here goes.

The order here is hopefully a step-by-step guide to the issues; it should be a bit easier for you than jumping in at the deep end like I did.

1. http://www.openspaces.org/display/DAE/GigaSpaces+PetClinic
This is what started it all off - see slides 13-15 in the slide show.

First, it indicates there's a fair amount of non-trivial work for your regular Java application developer. Second, it raises as many questions as it answers - in particular, if you have a lot of information, how do you distribute it across a grid, and how do you then integrate a transaction with backend databases and other stores?

2. Pat Helland's Apostasy - Life Beyond Distributed Transactions - the original 'slit-your-wrists' exhortation and still the best.
Pat says, forget doing distributed transactions in a scalable application - "distributed transactions are too fragile and perform poorly".

Pat's statement of the problem is brilliant; but his solution would mean that application programmers would end up doing lots of infrastructure work, which in my experience is a no-no. Surely the better answer is to productise this infrastructure functionality, so application developers have a simple sandbox and can quickly deliver business results.

2a. This leads onto Todd Hoff's highscalability.com site, and articles like http://highscalability.com/amazon-architecture. Be afraid... it's all too complicated.

3. Andy Bechtolsheim's talk at HPTS: http://www.hpts.ws/session1/bechtolsheim.pdf.
And here is James Hamilton's one-page in-flight summary.

Andy was one of the original founders of Sun.

Bottom line for the next 10 years:

- Memory and CPUs will become cheaper
- More memory and more cores
- The bottleneck of access times to hard disks is going to get 10x worse, which will mean they are gradually phased out for live data
- Flash memory will take over mainstream applications for storage sizes > main memory. But how many writes can you get out of them...
4. Stanford's Case for RAMClouds. RAMClouds means 'all active data in memory rather than on disk'.
And here's an easy-entry synopsis of the same article.

By the time it was published, RAMClouds wasn't a new idea ... but the paper does tie the previous item into forward thinking about architecture, and gives theoretical reasoning as to why RAMClouds will be one of the new architectures.

I actually saw Todd Hoff's overview piece first - http://highscalability.com/are-cloud-based-memory-architectures-next-big-thing.
5. The requirement. Google "600 billion RFID" and go from there.
Basically, applications will continue to get larger. A million on-line users isn't worth shouting about today. This is the case for thinking about application architectures that will survive the next 10 years - there are going to be loads of customers out there wanting information now.
6. Performance matters (admittedly from Akamai marketing literature):
2006: Respond to users in 4 seconds

And we're getting more impatient:

2009: Respond to users in 2 seconds

This is the business driver: handle more customers and give a better experience (and get a competitive edge).
7. The fundamental platform: Julian Browne's Space-based architecture.
Also GigaSpaces white papers. This is based on JavaSpaces. Here is Bill Olivier's take on the big problems JavaSpaces solves - http://www.jisc.ac.uk/media/documents/programmes/jtap/jtap-055.pdf -- see section 2.3.1.

"Jini addresses the hard distributed computing problems of: network latency, memory access, partial failure, concurrency and consistency".

The big thing developers have trouble getting their head round, is that in a scalable system every failure event must be handled as part of the application. Most developers are used to letting ops worry about failure modes. It's really hard in a large-scale distributed environment to get this right.
8. How to distribute data for application programmers: partitioning and the entity group pattern. This answers the question, "how do I spread data across nodes for best performance and easy management?"
Billy Newport has a good overview of grids and partitioning.

Google App Engine defines Entity Groups as the limits of transactions.
In CloudTran we use entity groups purely to define where information goes; cloud transactions can span entity group boundaries.
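As a rough sketch of what this looks like on a space class - assuming the standard GigaSpaces XAP POJO annotations (@SpaceClass, @SpaceId, @SpaceRouting); the Order/customer fields are just illustrative, with the customer acting as the entity group so that all of a customer's orders land in the same partition:

```java
import com.gigaspaces.annotation.pojo.SpaceClass;
import com.gigaspaces.annotation.pojo.SpaceId;
import com.gigaspaces.annotation.pojo.SpaceRouting;

@SpaceClass
public class Order {
    private String orderId;
    private String customerId;   // the entity-group / routing key
    private double amount;

    public Order() {}            // space classes need a no-arg constructor

    @SpaceId(autoGenerate = false)
    public String getOrderId() { return orderId; }
    public void setOrderId(String orderId) { this.orderId = orderId; }

    @SpaceRouting                // routes the object by customer, co-locating related data
    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }

    public double getAmount() { return amount; }
    public void setAmount(double amount) { this.amount = amount; }
}
```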
9. NoSQL (forget SQL) and BASE - An ACID Alternative. The database as we know it doesn't handle scalable applications and specialised requirements well.
The thrust of NoSQL (or 'not only SQL') is: if you really want scalable data, you can't have SQL and ACID characteristics. And there are certainly beyond-SQL databases, like BigTable, that have highly specialised characteristics.

For a hilarious counter, see Brian Aker's talk.

In CloudTran we provide transactionality for both SQL and NoSQL stores, coordinating in-memory data with eventual consistency at the data sources. Some of the SQL functionality for joins has to be done by hand, but it's about 90% there.
_______________________________________

This should get you started, Andrea.

Monday, 25 January 2010

Java Architecture and the Cloud - Players, Patterns, Products

Are you curious about where Java architecture might go in the next decade? Do you need to skill yourself up for application development in Clouds?

Having canvassed opinion, we've decided to put together a meeting for architects, project leaders and developers to discuss how to develop large scale, high speed, fully transactional applications in the Cloud. The evening will include a review of market trends, key players, products and architectures for cloud / grid commercial applications. And we'll also be announcing CloudTran, a new product to bring the Cloud into the mainstream as a platform for Java developers.

We're delighted that we'll be joined by Dan Stone (http://blog.scapps.co.uk/), who will give us a rundown of the leading products in this area based on his forthcoming book on the subject, and by Jim Liddle, UK Operations Director of GigaSpaces, who will talk about the game-changing features of GigaSpaces XAP.

There'll be time on the day for an open session as we're keen to get a dialogue going, so please come armed with your questions, comments, views, war stories. The evening will be held on Thurs Feb 11th from 5pm at The Masons Arms, London W1S.

If you'd like to come along, please sign up at http://www.eventbrite.com/event/533997200

Tuesday, 5 January 2010

What does CloudTran add over GigaSpaces XAP?

CloudTran's goal is to make it as easy as possible for Java developers to write mission-critical applications that are scalable and fast. CloudTran is layered on top of GigaSpaces XAP. As both products serve Java developers and provide transactions, we are often asked what CloudTran adds over GigaSpaces. Here's the bite-sized answer.

1 Coordinating data grid and storage services. GigaSpaces transactions coordinate operations in one space or between spaces, and GigaSpaces has a mirror service that does asynchronous write-behind to a database. However, in multi-database or multi-space scenarios it does not preserve atomicity or consistency between the spaces and the data storage services.

CloudTran provides rock-solid ACID transactionality that coordinates operations between the data grid and storage services, without the speed being limited by disk writes. This means that developers using rich domain models spread across different data stores can use transactions without worrying about whether the particular deployment configuration can handle it.

2 Object-Space-Relational Mapping. Java developers are used to working with an object view, where relations between objects are held as object references. But in a scalable, cloud-based system, objects are distributed across different machines and object references aren't possible. Instead, the relation must use an identifier - like a foreign key - that can be used to simulate an object reference by doing a lookup and fetch under the covers. This means that there need to be different types of objects: one used by Java developers in their app, having object references; and another stored in the space with foreign keys in place of the object references.

As if that wasn't enough, backing up Java space objects to a SQL database requires the usual object-relational mapping. The code has to load objects from the database into memory, as well as saving updates from memory to the persistent store.
In other words, there are three different views of information that need format definitions and mapping operations between them. CloudTran generates code to make sure this is all done correctly: JDBC is supported out of the box; other types of persistent stores can be mapped via plug-ins.
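To illustrate the first two views (these classes are made up for the example - they are not CloudTran-generated code):

```java
// 1. The view the Java developer codes against: a real object reference.
class Customer {
    String id;
    String name;
}

class Order {                    // domain object
    String id;
    Customer customer;           // direct object reference
}

// 2. The view stored in the space: the Customer may live on another machine,
//    so the reference is replaced by an identifier that is looked up on demand.
class OrderSpaceEntry {
    String id;
    String customerId;           // acts like a foreign key within the grid
}

// 3. The third, relational view is the usual ORM mapping, e.g.
//    CREATE TABLE orders (id VARCHAR PRIMARY KEY,
//                         customer_id VARCHAR REFERENCES customers(id));
```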

3 Automatic Configuration and Deployment. GigaSpaces XAP is an application server-level product. This means it has more features than just a caching product, but it needs to be configured and deployed. As Stefan Norberg says in his post, Why it sucks being an Oracle customer, the downside is that the developer has to do "configuration, deployments and all of that". This requires a lot of investment in learning configuration and deployment concepts, and increases the cost and risk of getting it wrong.

CloudTran provides modelling and code generation to help developers get over this hump. Modelling is via an Eclipse plugin which uses terms that developers can readily understand - entities, data sources, services and queue receivers/topic subscribers. Then the code generation makes it easy to convert the model into a production system - just add business logic.

Developers can also model multiple deployments, tied to the application model. The default deployment is to the Eclipse UI, but Windows, Linux and Amazon EC2 are supported. We have found it especially useful to be able to model deployments for different purposes (such as Dev, Test, UAT and Live) strictly driven by the application model - it avoids the finger trouble of reworking the configuration by hand when the application changes.

Wednesday, 11 November 2009

One line of trace

We're doing about 2,000 tiny (price-tick) transactions per second on our pair of quad-core boxes now. The CPU is still only at 60% utilisation, so there's still a bottleneck or two.

Say we eventually get up to 3,000 transactions per second at peak; then each transaction will be using around 300 microseconds of CPU.

Now it turns out that we have left in one line of console trace output saying "I have written transaction X", which takes roughly 60 microseconds on a single core, or about 15 microseconds of aggregate horsepower on our quad-core box.

So the point is: one line of console trace will use about 5% of available CPU when we reach our target ... is it really worth it? I sense another configuration option crawling out of the woodwork ...

Tuesday, 3 November 2009

Thousands of transactions per second

We've recently been performance testing CloudTran and we're now doing over 2,000 transactions per second. This is on a single-CPU, quad-core box running the transaction buffer and being bombarded from other machines over the network. "CatchingTicks" is the test, and the ticks are price changes on a market. Admittedly, those 2,000 transactions are among the smallest we can do, but the CPU runs them at less than 50% utilisation, so we can expect much higher throughput in the future.


We spent ages stuck at around 900 transactions per second, looking at every part of the system ... until we realised we were using the wrong variant of the test. We assumed we were using the performance-test mode; in fact, we were using the "isolation" test, which checks that transactions working on the same record/row observe isolation correctly. So we've tested isolation to death, inadvertently. And it wasn't all wasted time - when you look at code with suspicious eyes and a particular problem in mind, you realise there are angles you missed. Performance testing is certainly interesting for tech-heads. We've moved from fairly simple synchronisation techniques to Java's concurrency library and atomic variables with non-blocking synchronisation.


To do functional debugging, we use heavy tracing of anything that moves, but for performance we use 'OperationTimer' objects - timing the start and end of an operation. Analysing these logs is exciting - you have to be very careful that you're measuring what you think you are. Our life has been made much easier by having coordinated nanosecond timers, so that all the bombarder machines calculate their time relative to the TxB (transaction buffer) machine. Then, instead of looking at all the logs, we take a one-second slice of the OpTimer output (about 10,000 lines) and sort it on the time of the log. The important thing is that we can then trace an individual transaction's path between the various boxes quite easily - the sorted trace is in global time order.
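The real OperationTimer isn't shown here, but a minimal sketch of the idea (a hypothetical class, using a per-machine clock offset so that timestamps from all boxes sort into one global order) might look like this:

```java
/** Hypothetical sketch of an operation timer: log the start and end of an operation
 *  with a nanosecond timestamp adjusted to the transaction buffer machine's clock. */
public class OperationTimerSketch {
    private final String operation;
    private final long clockOffsetNanos;   // this machine's offset from the TxB machine
    private long startNanos;

    public OperationTimerSketch(String operation, long clockOffsetNanos) {
        this.operation = operation;
        this.clockOffsetNanos = clockOffsetNanos;
    }

    public void start() {
        startNanos = System.nanoTime();
        log("START", startNanos);
    }

    public void end() {
        long endNanos = System.nanoTime();
        log("END", endNanos);
        // elapsed time for this operation, if wanted: endNanos - startNanos
    }

    private void log(String phase, long localNanos) {
        // Logging in "global" time lets the logs from all machines be merged
        // and sorted into a single, time-ordered trace of one transaction.
        long globalNanos = localNanos + clockOffsetNanos;
        System.out.printf("%d %s %s%n", globalNanos, phase, operation);
    }
}
```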


Unfortunately, we're now bumping up against Heisenberg's uncertainty principle - taking just the OpTimer measurements reduces speed by a factor of 3, which almost certainly changes the nature of the events ... timing is everything! So I suspect we'll end up using something like Dynatrace in the near future.


Another point to watch out for is the server's HotSpot optimisation - this affects the performance of the GigaSpaces GSCs. We normally run 50-100,000 transactions through the system before taking timings, which suggests that the bar for optimisation is set very high in the server configuration of HotSpot.


Testing continues as we use the results to continually improve performance. And for now it's back to the logs.