~~CLOSETOC~~
<html><font color=#990000 size="+2"><b>The Platform Basics</b></font></html>

{{ :wiki:general_moc:rd-platform-doodle-1.1.png?nolink&500| StreamScape Reactive Data Platform™}}

The **<color gray>Reactive Data Platform™</color>** is a new [[wp>Middleware (distributed applications)|middleware]] technology that makes use of [[wp>Reactive programming]] concepts to manage, integrate and analyze in-flight application data.  We call this **<color gray>reactive data processing</color>**.  The platform is purpose-built for modern Web and Cloud computing needs and is best described as a **<color gray>data processing network</color>**. [[whats_app|Reactive Data Processing]] enables rapid development of high-performance systems that can process and aggregate vast amounts of data without storing it.  In the software stack this technology fits into the service layer between applications and enterprise resources, serving up data structures, managing transient data and facilitating cross-application data exchange. 

Why is this important? Developing scalable applications that analyze [[ http://en.wikipedia.org/wiki/Data_at_Rest#mediaviewer/File:3_states_of_data.jpg | data in motion]] and integrate with enterprise applications, a [[wp>System of record|system of record]] or historical [[wp> Data_at_Rest | data at rest]] is hard.  Users often have to access disparate data sources such as Cloud or Web Services, Application Servers, Databases, Files or Messaging Systems; transform and aggregate such data; and then push it to consuming applications or devices.  A typical solution is assembled from multiple components and requires specialized knowledge of many disciplines.  This tends to be slow and expensive, and results in brittle systems that require highly skilled specialists (programmers) to manage.  The **<color gray>Reactive Data Platform™</color>** empowers data analysts and system integrators, allowing them to create data processing flows visually or through declarative script, requiring no specialized skills or knowledge of programming. This makes data integration, application development, deployment and maintenance radically simple.  It saves time and money, and future-proofs the architecture.

In many ways a reactive data platform plays the same mediation role as classic middleware such as an [[wp> enterprise application integration ]] broker or [[wp> Enterprise Service Bus]].  Unlike prior solutions, this technology gives users the ability to manage and query in-flight data transactionally, as well as integrate structured or unstructured content.  While simple integration, query and transformation tasks may take minutes to configure, complex data flows involving aggregation, concurrency, stream processing and asynchronous logic are just as easy to implement.  As such, the motivation and design concepts behind the architecture differ significantly from those of its predecessors.

<html><font color=#990000 size="+1"><b>Design Concepts</b></font></html>

The StreamScape data engine combines aspects of Service Oriented Architecture, Data Management, Caching and Event Stream Processing technologies into an integrated data processing platform whose resources (services, data collections, functions, etc.) may be accessed via SQL, JDBC, Browsers, RESTful Web API and even Instant Messaging applications. The following concepts are critical to the engine's design.

====Event Stream Processing====

Data engine components communicate through a peer-to-peer communication system called an <color green>Event Fabric™</color>, allowing them to exchange event streams or messages that contain structured data such as objects, documents or tabular row sets.  

Event Stream Processing (ESP) is a relatively new computing paradigm, designed specifically to address the requirements of distributed, real-time, high-performance data processing. In contrast to traditional data management systems wherein data are first stored, indexed and then processed by queries, event processing applications work on data in-flight, as it moves through the system.  Data streams flow through applications as discrete events (or event tuples) composed of a set of structured data elements. Queries for event filtering, correlation and aggregation are applied to in-flight data and results are delivered to consuming applications, facilitating streams of actionable information. Reacting to such events provides an organization with real-time situational awareness, allowing it to quickly adapt to the ever-changing landscape of enterprise data.

The Reactive Data Platform™ exploits event stream processing as part of its communication fabric and data processing facilities, allowing users to create scalable, event-driven data processing applications that can be easily integrated with other event stream processing solutions. The data engine is a technology complementary to Complex Event Processing (CEP), providing critical data query capabilities, event sourcing, data mapping, integration and event-based communication facilities. The engine's unique features allow a broad range of enterprise, web application and social computing activities to be turned into actionable events, expanding the value of event processing technologies and making them part of the collaborative computing environment.

====SOA and Composite Applications====

Service Oriented Architecture (SOA) services enable companies to expose critical business functions as reusable components.  Composite applications, also referred to as service-oriented applications, are built by combining existing services into a new application or process flow.  SOA addresses the challenge of integrating multiple disparate systems that make use of different languages, network protocols and data formats.  As an architectural approach, SOA improves business agility, fosters re-usability of application logic and eliminates the boundaries between business domains.

The Reactive Data Platform™ is built from the ground up to host composite, service-oriented applications by supporting a broad variety of protocols and providing flexible service interface definition facilities, allowing projects to be completed at a fraction of the time and cost of traditional solutions.

From a business perspective, SOA can help organizations respond more quickly and cost-effectively to the changing market conditions and needs of their customers.  It is common for departments within a company to develop and deploy SOA services using different implementation languages and network protocols, using message-oriented middleware to facilitate communication between the loosely coupled service components.  Loose coupling allows services to be organized (or re-combined) into process pipelines or so-called micro flows capable of performing complex data processing tasks.

====Structured Data Management====

In addition to hosting business logic, the platform provides virtualization and storage facilities, called <color green>Application Dataspaces™</color> for management of structured (Tabular) and unstructured data such as Objects, Files, XML or JSON documents.  [[wp>Dataspaces]] provide an essential abstraction on top of existing data management systems that overcomes many problems in data integration.  Dataspace content may be stored entirely in-memory, logged, replicated or written to a file system or backing data store based on application needs.  Data may be organized into a variety of collections such as Maps, Tables, Files or Queues and queried using SQL-like syntax.

Architects and distributed system developers often encounter a common problem:  How to achieve a reliable and efficient structured data exchange between applications and service components?  Without this critical capability application interfaces become brittle and difficult to change over time, resulting in problems with interoperability and system integration.  Such issues directly impact an organization’s bottom line, increasing cost of ownership and making change management a time-consuming and error-prone task.

The Reactive Data Platform™ provides a robust, flexible framework for object mediation and data marshaling called Structured Data Objects (SDO). Using structured data objects developers can serialize user-defined data structures into a variety of formats providing a language-neutral mechanism for structured data exchange, object persistence, API definition and more.  

Data objects may be stored and replicated using the fabric’s Application Data Spaces™.  SDO may also be queried and modified by using an object-oriented API, HTTP and Browser applications, as well as standard SQL. This capability is critical to the success of both the event stream processing and service-oriented architecture disciplines.

The SDO framework offers a library of functions for indexing (annotation), serialization and object persistence as well as utility classes for semantic data translation and validation.  Binary, XML and JSON formats are supported and may also be used to effect fast and efficient data mapping and transformation of objects and XML documents. Developers can take advantage of an extensive library of data validation and formatting functions in order to tweak their output documents or conform to specific input types. The library includes support for XML Name Space resolution, default null handling, text compression and more. Developers may specify arbitrary relationships between object types, thereby defining their own taxonomy and data ontology based on application needs. 

Structured data objects can be utilized as body elements of an external messaging system (such as JMS), written to data collections hosted by application data spaces or used as payload in Event Datagrams, providing a flexible solution for structured data exchange between clients and  application fabric components.

===User-Defined Types===

Any Java class accessible to the engine can be made into a user-defined data type and used by the application fabric.  In order for a user class to be recognized by the framework, it has to be registered as a Semantic Type.  Registering (or aliasing) a class simply associates a distinct name with the object.  Classes that are registered as semantic types may be organized into user-defined relationships according to an application-specific taxonomy.  This allows developers to define semantic relationships between object types based on business needs rather than the object model.  For additional information see Chapter 2: Semantic Types.
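The exact registration syntax is covered in Chapter 2; as a purely hypothetical sketch, aliasing a Java class under a semantic type name might look like the following (the statement form and class name are illustrative, not verified syntax):

<code dsql>
  /** Hypothetical: register a Java class as the semantic type 'Order' */
  create type Order from class com.acme.model.Order;
</code>

Once aliased this way, the class would be referred to by its semantic name rather than its Java package path throughout the fabric.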

===Models and Instances===

Semantic types are considered data type definition entities, in the sense that they describe a data structure or model.  Users may create and use instances of the type programmatically and serialize such instances into a variety of formats without writing additional code, relying on language-native serialization, or preparing the objects in any way beyond simply registering them.

In application fabric parlance, the semantic type is considered the model of a specific object and a concrete class is considered an instance. This terminology applies to all known objects in the application fabric, including events and data collections.  It is typically expressed as [Model].[Instance] for the purpose of query reference or persistence.  For example, an instance of a ''String'' called myString would be expressed as ''String.myString'' and serialized to XML as a ''String.myString.xdo'' file entity.  An event prototype would use a similar taxonomy and refer to an instance of event model TextEvent called txtData as TextEvent.txtData, persisting it in the repository as ''TextEvent.txtData.xdo'' file entity.  A data collection of type Map with instance myMap will be referenced as Map.myMap for the purpose of type classification.  Models and Instances are used to classify persistent entities within the application fabric.  Entity names are intended to be unique across the domain.
\\

<html><font color=#990000 size="+1"><b>Platform  Concepts</b></font></html>

====Deployment Descriptor====

A deployment descriptor is a configuration artifact that describes an engine instance and the resources it hosts, telling the platform how and where the engine is to be deployed.

====Runtime Configuration Cache====

A runtime configuration cache holds application, data, and configuration information for a particular engine instance.  Although each engine has its own cache, when part of a sysplex the engine synchronizes critical data such as global meta-data and security with other nodes.  The result is a so-called [[wp>Shared nothing architecture|shared-nothing architecture]] that allows each node to function independently while being part of an integrated data-processing network.

See [[runtime_configuration_cache|Runtime Configuration Cache]] for more information on configuration cache internals. 

====Engine Runtime====


====Embedded Runtime====

The //embedded runtime// option allows users to deploy a StreamScape data engine as a component within another Java Application, Apache Tomcat, an Application Server or any Java-based product that supports the [[wp>Java Database Connectivity | JDBC specification]].  This powerful capability allows users to turn almost any Java application into a fully functioning application fabric node.  Such applications can then use the StreamScape engine to exchange messages and data, replicate content and state, and tap into the full capabilities of the broader application fabric, such as virtualization, event processing and data analytics.

The engine is certified to be embedded in a variety of technologies including Apache Tomcat, application servers from Oracle/BEA and JBoss as well as development environments such as NetBeans and Eclipse. User experimentation is welcome.   See [[advanced_topics#Embedding the Runtime | Advanced Topics]] for more detailed information about how to embed a runtime into a Java application container. 

====Management Nodes====

====Task Nodes====

====Managed Nodes====

====Managed Resources====

Managed resources are the underlying back-end resources that have been virtualized by the application fabric, such as Virtual Tables, Database Connections, Applications (such as Salesforce), File System proxies (i.e. Hadoop) or File Tables.

====Acceptors====

Protocol acceptors are configurable network listeners that allow client applications to connect to engine instances (nodes).  Protocol acceptors are bound to specific ports of a [[wp>Network interface controller|Network Interface card]] and may not always be available to network clients.  Specifying the host name as ''LOCALHOST'' limits the scope of network communications on most operating systems, allowing only local client applications that run on the same machine to see and connect to the engine.

Acceptors technically differ from the concept of [[basic_concepts#Access Points]] because they may be configured for protected (hidden behind a firewall) or private (inter-machine) communications.  Acceptors support core communication protocols used by the application fabric, however not all types of acceptors may be used for query and management of the application fabric.

The following protocols are supported:

^ Protocol ^ URL ^ Description ^ Management & Query ^
| TruLink Protocol | TLP  | StreamScape proprietary protocol for Messages, Events and Structured Data Exchange | yes |
| HTTP Protocol    | HTTP | HTTP Protocol that supports Requests, HTTP Streaming and Structured Data Exchange  | yes |
| XMPP Protocol    | XMPP | Extensible Messaging & Presence Protocol (([[wp>XMPP]] protocol used by [[wp>Jabber.org]] and similar Instant Messaging Clients)) | no |

When application engines are deployed as a [[basic_concepts#Sysplex]], potentially behind a firewall in a Cloud ([[wp>SaaS]]) environment, not all of the acceptors will be visible to clients. Those that are available for direct connection will be referred to as [[basic_concepts#Access Points]] and those that are not will simply be internal acceptors, used by the application fabric engines to communicate.

====Access Points====

Access points are publicly available [[basic_concepts#Acceptors]] (network listeners) that can process requests via specific network protocols.  An access point is associated with a specific TCP/IP port and supports either TLP ((The StreamScape specific Tru-Link Protocol)), HTTP(S) or XMPP type of communications.  In StreamScape parlance, access points exposed by Management Nodes that use either TLP or HTTP protocols are considered Management Access Points.  Clients connected to management points may administer processor nodes ((Also called Task Nodes)) and perform critical tasks such as Deployment, Check-Out, Check-In, Synchronization, and the creation and removal of application fabric nodes.
 
====Routed Node Access (Proxies)====

====Protocol Exchange====

====Protocol Links====

====Protocol Rings====

====Sysplex Partitioning====

====Re-Partitioning====

====Scavenger Process====

====Fabric Resource Module====

A fabric resource module (FRM) is a mechanism for packaging and distributing application engine configuration artifacts. An FRM is a ZIP file with a special <color gray>**.frm**</color> extension that contains, among other things, a compressed copy of the <color gray>**.tfcache**</color> repository along with version and change information.

Users should not edit an FRM archive by hand, but rather use tools such as Workbench or SLANG to work with its contents.  Resource Modules may contain partial repository contents and may include other archives and data.  They may also be encrypted by the platform tools for security purposes.

Resource Modules are used to exchange configuration data during Check-out, Check-in and Synchronize operations.  They may also be used to take a configuration snapshot and store it in a Source Control environment such as SVN or Git.

====Creating an Engine====

====Deploying an Engine====

====Un-Deploying an Engine====

====Check-Out an Engine====

====Check-In an Engine====

====Synchronize an Engine====

====Removing an Engine====

====Topology View====

\\

<html><font color=#990000 size="+1"><b>Event Fabric Concepts</b></font></html>

At the heart of the application engine is the Service Event Fabric™, a self-organizing event cloud capable of hosting loadable application components and providing facilities for adaptive peer-to-peer communications. The event fabric is a //patent-pending// network of light-weight messaging agents, referred to as sysplex nodes.  A node may function as an independent container for service components and may be embedded into Java programs, application servers or reporting tools, turning such applications into fully functioning fabric nodes.  The event fabric allows components to easily communicate with each other using Publish/Subscribe, Direct (point-to-point) Links, Queues or task-oriented (cooperative worker) communication models, utilizing a content-based addressing scheme.

====Content Based Addressing====

The application fabric makes use of content-based addressing and routing between participants wherein event producers and consumers are matched by the content identifier (( an //Event Id// )) of the data they exchange.  In contrast to subject-based addressing used by traditional publish and subscribe messaging systems where a queue, topic or subject identifies a discrete communication channel, an event id represents the content and structure of the data.  

Raising an event with a given id advertises that data with specific structure and content has become available.  Consumers register interest in events by specifying an event id or filter and may narrow the scope further by applying an event selector.  Selectors use SQL-like syntax to choose events of interest based on their content.   
   
Content based addressing allows fabric components to engage in direct exchange of data without the need to define and manage numerous communication endpoints, eliminating the most time consuming and error prone aspects of messaging application development. 

====Events and Event Datagrams====


====Event Id====


====Event Model====


====Event Data (Payload)====



====Event Prototype====

Event structure is defined by creating an event prototype and associating it with a unique event id.  In the runtime context an event’s payload is based on Java objects, whereas the client context may provide language-specific bindings that will result in generation of Java classes.  Bindings for XML and JSON payloads are also supported.

The fabric mandates that all data objects are known entities and requires developers to register their classes with the Object Mediation Framework (OMF).  Registered classes are called [[Semantic Types]].  Any Java class may be defined as a semantic type and be used as event payload. Classes need not implement any special interfaces or serialization code.  The OMF handles object graph resolution and manages all aspects of data marshaling and un-marshaling.    

<image here>

Classes that are registered as semantic types may be organized into user-defined relationships according to an application specific taxonomy.  This allows developers to define semantic relationships between events and may be useful in constructing event filters.  The fabric allows developers to save semantic type definitions and to distribute and synchronize them between participants.  A library of system types is provided for use by developers and includes Row Array, XML Document, Row Set and SQL Query objects. 

Defining an event prototype is easy.  Developers may extend any of the system event types such as Text or XML.  Alternatively, they may define their own semantic type and use it to configure a new instance of a Data Event.  The event is then registered as a prototype and assigned a unique event id.  Once registered, the prototype may be used to obtain a new instance of an event that is ready for transmission.  Prototype definitions may also be saved, distributed and synchronized between participants.
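Building on the ''create data object'' and ''constrained by'' syntax shown later in this chapter, a prototype definition might be sketched as follows (the ''create event prototype'' statement is a hypothetical illustration, not verified syntax):

<code dsql>
  /** Define a semantic type to serve as the event payload */
  create data object StockTick (string symbol, big-decimal price) ..
  /** Hypothetical: bind the type to a unique event id as a prototype */
  create event prototype [event.tick] constrained by StockTick;
</code>

Once registered, ''[event.tick]'' would advertise data of type ''StockTick'' to any interested consumer.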

Similar to a message, an event datagram contains a number of system properties such as event id, time stamp, event key, or correlation id.  Developers may also add their own properties to the event based on primitive data types.  Event properties are part of the prototype’s definition.  Event Selectors may reference the properties of an event to filter and match on specific data content.   The selector supports SQL-like syntax, allowing consumers to define complex event matching criteria. ((The Service Event Fabric is not a Complex Event Processing engine.  Rather, it is expected that the fabric works with CEP engines, sourcing events and creating streams for such engines to process.  The ability to define and share event definitions provides a powerful platform for stream sourcing with interoperability across various CEP vendors.))

====Event Identity Management====


====Actionable Event====

An actionable event is an internal event raised in response to a system activity, such as the creation of a file, and may be bound to user-defined processing logic.  For example:

<code dsql>
  create event trigger on [event.file] as e
    when fileAction = 'CREATED'
      as
       {
       raise event e;
       } 
</code>

====Event Scope====       

Fabric events may be scoped to a single component (local), all components within a specific runtime (observable) or all fabric nodes (global).  This allows developers to enforce traffic isolation not possible with conventional Publish/Subscribe messaging systems.  Event scope may be set at the component level making it possible for certain runtime components to function as bridges.   
{{ :wiki:concepts:evscope.png?nolink&800|}}

Traffic isolation is useful in a number of situations.  For example, disabling global scope ensures that events targeted at a specific runtime component or system are not erroneously picked up by unintended external recipients.  Disabling observable scope allows components to raise events that only they can see.  This allows fabric components to use the same event-passing API for internal communications as for cooperative processing.

Controlling a component’s event scope has another important side effect.  Disabling global scope allows multiple runtime components (and process flows) to make use of the same event types in parallel.  Because observable events are only visible within the runtime, even when components in different runtime environments raise events with the same event id, there is no possibility of cross-talk.   Unintentional cross-talk is a very common problem in Publish/Subscribe messaging systems.  Because such systems do not offer traffic isolation capabilities, developers often have to create their own mechanisms.  

By contrast, event prototypes and event id may be reused for different purposes without the risk of mistakenly triggering logic in other components that use the same event id.  Process flows and entire runtime environments may be safely cloned in this fashion making reuse easier and improving developer productivity.

====Durable Events and Caching====

{{:wiki:concepts:e-cache.png?nolink&400 |}}

Event producers support the ability to cache events at the source.  Cache-able events are said to be durable and must be created as such by the producer.  An event cache allows the producer to retain the most recently raised events in a local memory buffer.  Buffer size may be configured to cache the last nnn values.

When a new consumer is started for a specific event, the events in the cache are delivered first.  Afterwards the consumer is joined to the live stream and begins to receive events normally.  In case of a sparse producer this mechanism allows consumer applications to initialize state based on the most recent set of events they may have missed.  If the consumer is occasionally connected the cache may be used to hold event datagrams that were missed between connections.

====Event Selectors====

The application engine provides a powerful facility for selectively filtering events by content.  In addition to filtering by event id, consumers may specify Event Selectors, thus allowing events to be filtered based on their properties by using an SQL-like syntax. Event selection may be applied to header fields or the user-defined properties of an Event Datagram.  By defining an event selection clause the application ensures that only datagrams with content matching the selector criteria are delivered to the consumer.

The selector is an extension of the Message Selector capability as outlined in the Java Messaging Service (JMS) specification.  Similar to JMS, an event selector is a String whose syntax is based on a subset of the ''SQL-92 conditional expression'' syntax, with several notable differences.  

  * Event Selectors support advanced date comparison and date range arithmetic

  * Event Selectors allow for regular expression matching of properties

  * Event Selectors may match on Domain and Range collections allowing criteria to be changed dynamically

  * Event Receivers may dynamically change selection criteria between receive operations 

  * Event Selectors may be pre-validated for reuse, eliminating run-time syntax errors

  * Selection may be performed against Annotated Fields, allowing data-aware filtering by payload content

The fabric optimizes selector performance by pushing match logic execution into the dispatcher of the event producer. This technique ensures that only events in which consumers have registered an interest are actually raised.  Dispatcher based filtering is extremely efficient, capable of processing hundreds of thousands of events per second without significant impact on throughput or latency of the system. 
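Drawing on the ''when'' clause syntax used in the actor examples later in this chapter, a selector combining several property comparisons might be sketched as follows (the property names are illustrative, not part of any predefined event):

<code dsql>
  create actor UrgentTaskHandler
      on [event.task] as TaskEvent
         when (groupKey = 'High Priority' and retryCount < 3)
           for any event
             {
               /** handle matching events here */
                ..
             }
</code>

Only datagrams whose properties satisfy the full conditional expression would be delivered to the actor.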
\\

<html><font color=#990000 size="+1"><b>Language Concepts</b></font></html>

====Reactive Programming (RPL)====

Reactive programming is a declarative approach for defining business logic that spans multiple inter-dependent components. Declaring dependencies between data elements creates a link between system components, allowing dependent components to automatically react to changes in the source.  Reactive programming languages (and systems) manage such dependencies as part of their core function, eliminating the need to write programs to accomplish the task. 

Conceptually, variables or data collections are defined by assignment expressions such as ''A<--(B + C)'', wherein ''A'' automatically reacts to changes in ''B'' or ''C'' at some future time.  Once the relationship between data elements is established re-calculation of values for ''A'' happens automatically as a reaction to ''B'' or ''C''.  Furthermore, if ''A'' itself is used in further assignment to declare a reactive relationship, for example ''D<--(A)'' then changes to ''B'' and ''C'' are automatically propagated to ''D''.  In mathematics this is known as a [[wp>transitive relationship]].

By contrast, in imperative (procedural) programming ''A = B + C'' is a finite assignment statement, wherein ''A'' is set once, when the assignment statement executes. Subsequent changes to ''B'' or ''C'' do not affect ''A'', unless you specifically re-calculate ''A'' by executing the statement again. Programmers are responsible for understanding and managing these dependencies as well as relationships between data elements.
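Using the assignment notation introduced above, the contrast can be sketched as follows (illustrative notation only):

<code dsql>
  /** Reactive: A re-computes whenever B or C changes */
  A <-- (B + C);
  /** Transitive: changes to B or C propagate through A to D */
  D <-- (A);
  /** Imperative: A is set once; later changes to B or C do not affect it */
  A = B + C;
</code>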

At the heart of the [[whats_app|RDP]] is a reactive data engine that provides a language for expressing reactive relationships as well as networking facilities for making sure that changes to data are propagated in a reliable fashion. The engine allows users to raise and process events and takes care of managing dependencies and relationships between variables or data collections.  Events are propagated to participating components in an ordered fashion, allowing users to declare reactive relationships between service components, data and application logic through the use of [[#event handlers]], [[#RPL functions|functions]] and [[#event triggers]]. Event structure may be declared by using JavaScript-like syntax or imported from Java objects, XML and JSON sources.

Collectively, the environment for defining reactive logic is referred to as Reactive Programming Language Script or simply RPL Script ((Pronounced Ripple-Script by those in the know)).  The application fabric makes implicit use of the so-called [[basic_concepts#Message Passing Interface]], a form of messaging that allows many participant applications to simultaneously exchange complex data structures (i.e. documents) in a location-transparent fashion. Reactive programming has principal similarities with the [[wp>observer pattern]] commonly used in [[wp>object-oriented programming]].  However, the application fabric integrates data flow and procedural concepts into the programming language, making it easy for users to define implicit observers (such as event triggers or event handlers).  This increases the granularity of the data flow and simplifies parallel computing, resulting in so-called [[wp>implicit parallelism]], which allows a programming language to automatically exploit the parallel computing capabilities of networked systems without specific code being written to do so.

Unlike traditional computing languages developed for expressing logic and relationships between language constructs (such as variables or data elements), RPL Script is a distributed computing language.  It is used to express logic and relationships between networked components //and// their data elements. RPL Script does not require compilation and may be extended through the use of DSLs, the importing of Java classes or user-defined functions.  Results of computations may be easily passed to components running on different machines by raising events.  

For example, define a collection of ''Tasks'':

<code dsql>
  /** create a Task object */
  create data object Task (string name, int id, big-decimal data1, string data2) ..
  // create a Task Queue 
  create queue [my.TaskList] constrained by Task;
  ..
</code>

Then process the ''Tasks'':

<code dsql> 
  ..
  /**
   * Loop thru the queue taking each Task element and
   * raising it as an event for processing by other nodes..
   */
  while((select count(*) from [my.TaskList]) > 0)
    {
    Task task = take from [my.TaskList];
    DataEvent e = new [event.task];
    e.data = task;
    e.groupKey = task.priority;
    raise event e;
    }
  ..  
</code>   
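The event id ''event.task'' used above presumes that an event prototype has been declared elsewhere in the fabric. As a hedged sketch of what such a declaration might look like (the ''create event'' keyword is illustrative; only ''constrained by'' is confirmed syntax from the examples in this chapter):

<code dsql>
  /** hypothetical prototype tying the event.task id to the Task payload type */
  create event [event.task] constrained by Task;
</code>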

The result of such logic is an event stream of type ''Task'' raised on event id ''event.task''.  It is assumed that an event prototype for ''event.task'' has been previously defined and that its payload (data content) is of type ''Task''.  The reactive data model allows users to define multiple event processors (actors or event-aware data collections) that can subscribe to specific subsets of events. For example:

<code dsql>
  create event queue [Tasks.Priority] 
     constrained by [event.task]
       when groupKey = 'High Priority';
</code>

This creates an event queue that automatically subscribes to ''High Priority'' tasks.  In the example above, priority is put into the ''groupKey'' property of the event and is derived from the ''Task'' object.  It is assumed that ''Tasks'' are put on the queue by another application that allows users to specify their priority.  
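Since an event queue behaves as a data collection, a consumer would presumably drain it with the same ''take from'' syntax shown earlier for the blocking queue (a sketch; this behavior for event queues is assumed here, not confirmed by the chapter):

<code dsql>
  /** consume high-priority tasks as they arrive on the event queue */
  Task t = take from [Tasks.Priority];
  ..
</code>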

Alternatively an event actor for processing ''Low Priority'' events may be defined elsewhere in the application fabric and added to the system later:

<code dsql>
  create actor LowPriorityTaskHandler
      on [event.task] as TaskEvent
         when (groupKey = 'Low Priority')
           for any event
             {
               /** some logic here */
                ..
             }   
</code>
  
The above examples capture the essence of reactive programming and also illustrate reactive data processing through the use of Event Queues. Producers are completely de-coupled from consumers, yet their interfaces and associated data structures are well-known.  Declarations such as ''constrained by'' and ''on [event.task]'' enforce type safety and make sure that actors process events of a specific structure.  Event consumers (actors) are developed independently of producers (event sources), allowing for easy versioning or re-combination. Actors (as well as triggers or event-aware collections) are designed to process (react to) events in a location-transparent fashion, allowing system designers to easily change out front-end applications or back-end components without disrupting the existing system. 
  
Note RPL Script's ability to easily mix data assignment, query language and event processing syntax.  The ''while'' loop block makes it possible to turn any data element into an event that can be seen and processed by one or more services, actors or data collections such as an [[#Event Queue]] or [[#Event Table]].  Of course ''my.TaskList'', the [[#Blocking Queue]] that holds the initial list of tasks, may itself be easily replaced with an event queue, allowing it to accept events from other applications and forming a more complex event flow. 
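A sketch of that substitution, reusing only the ''create event queue'' syntax shown above: declaring ''my.TaskList'' as an event queue lets it fill itself from the ''event.task'' stream rather than from a local producer.

<code dsql>
  /** an event queue in place of the original blocking queue */
  create event queue [my.TaskList] constrained by [event.task];
</code>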

StreamScape's reactive data environment allows users to easily implement a variety of parallel processing models such as [[wp>Map Reduce]], [[wp>Fork-join model|Fork-Join]] and [[wp>Sharding]]; while offering the flexibility of stateful or stateless event processing components.  
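As a minimal sharding sketch built only from the ''create event queue'' form shown earlier (queue names and key values are illustrative), each queue subscribes to a disjoint slice of the same event stream, so downstream consumers can process the shards in parallel:

<code dsql>
  /** two shards of the event.task stream, split on groupKey */
  create event queue [Tasks.Shard.A]
     constrained by [event.task]
       when groupKey = 'Shard-A';

  create event queue [Tasks.Shard.B]
     constrained by [event.task]
       when groupKey = 'Shard-B';
</code>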

Reactive programming (RP) is a technique that focuses on developing so-called actors; bits of procedural logic that are triggered by the arrival of events from the network or an observed component.  Although there has been [[http://developers-beta.slashdot.org/story/14/02/18/1938215/can-reactive-programming-handle-complexity|much debate in the developer community]] regarding RP as a programming pattern for low-level languages such as Java, C and even SQL triggers, little has been said about the value of reactive data as an architecture or a general computing principle.  Unfortunately, much of that debate misses the point.  The reactive paradigm is not offering an alternative [[wp>Shopping cart software|shopping cart]] implementation; rather, it addresses new issues in system design that have surfaced with the advent of [[wp>Cloud Computing]] and the [[wp>Software as a Service|SaaS]] model.  More information on the benefits of reactive programming can be found [[Reactive Programming Concepts|here]].

====Event Triggers====

All application fabric components are capable of generating events. Client applications, web requests, the runtime, services and data collections all create so-called [[#Actionable Events]]: events internal to a component that may be acted upon by RPL Script.

To process actionable events the user can define event triggers; bits of procedural script that receive an internal event and are able to act on it.  In [[#Event Fabric]] parlance, actionable events are always raised by participants with a LOCAL [[#Event Scope]].
Services and data collections are considered resource components, and their events are always raised with local scope as internal, actionable events.  To receive events from a resource, users must define event triggers on the resource component's actionable events using the fabric's Event Definition Language (EDL).  Creating triggers allows users to define their own event streams.  Event triggers extend the capability of selectors in the following fashion:

  * Event Triggers support implicit groups, allowing similar events from multiple sources to be isolated
  * Event content may be enriched by editing or adding user-defined Event Properties
  * Events may be re-cast with a different event id or scope, or raised as Advisories or as Exceptions
  * Certain types of triggers allow direct event processing through the use of Action Script functions
  * Developers may register their own Event Trigger types and define domain-specific Action Script syntax

The excerpt below shows general trigger syntax and provides an example of a simple event trigger on an actionable event called ''event.stock.ticker''.  A Publisher trigger is defined on a component that is producing a stream of stock quotes.  The trigger selects only those events whose symbol is ''IBM'' and raises them with an event id of ''event.stock.ticker.IBM''.  The example illustrates how easy it is to define a new event stream by using event triggers.
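A sketch of such a trigger, modeled on the ''create actor'' syntax shown earlier (the ''create trigger'' and ''raise as'' keywords are illustrative, not confirmed EDL syntax):

<code dsql>
  /** Publisher trigger: select IBM quotes and re-raise on a new event id */
  create trigger IBMQuoteFilter
      on [event.stock.ticker] as TickerEvent
         when (symbol = 'IBM')
           raise as [event.stock.ticker.IBM];
</code>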


====Event Handlers====

====RPL Functions====

====RPL Script====


====State Coherence====

In application fabric parlance, state coherence refers to the synchronization of shared data between participants, so as to provide a consistent view of such shared information across multiple autonomous sites (sysplex nodes).  The service engine implements state coherence at several levels, allowing fabric nodes to share state, data and configuration information and to function as a flexible, unified system.  

====Federated Security====

The federated security model mandates that user credentials, access control and other authorization artifacts are shared across the sysplex, allowing for single sign-on and global user authentication.  The application fabric keeps a synchronized copy of all security credentials by maintaining a replicated security data store.  Although each node manages a local copy of the security store, security information is distributed to all member applications.  Specific concepts pertaining to the federated security model are described in Chapter 2: Security and Authorization.

====Shared Configuration====

Application fabric nodes maintain an independent configuration cache that is capable of sharing function libraries, configuration artifacts and critical meta-data information with their peers.  A configuration cache makes use of the application fabric’s state coherence engine to synchronize event object prototypes, global configuration elements and to distribute compiled application components.  This dramatically simplifies the change management process allowing for global changes to be applied in a simple, easy to manage fashion.

Shared configuration allows for managed independence between loosely-coupled components, making it possible for services and data applications to be developed in an autonomous fashion without the need to manually synchronize configuration artifacts, interface definitions and application components. Synchronized configuration provides a unified governance and change control mechanism, giving administrators a reliable way to manage the distributed environment.  Chapter 11. Entity Repository provides additional information and examples for working with the shared configuration cache.
====Data Replication====

One of the most powerful features provided by the fabric’s state coherence engine is replication of information between data collections.  Application Data Spaces™ may be configured to automatically share and synchronize content, potentially in a transacted fashion, allowing users to design and implement distributed and event-driven data models.

Replication allows data space collections with the same model and structure, such as a Table, Queue, Map or File to be created at multiple locations across the sysplex and appear as a single logical entity.   Changes applied at one location will be synchronized with all replicated copies.  Data may be updated at any location within a replicated collection group providing a scalable approach to distributed data modification.  

The current data space version supports state coherence through the use of replication triggers. For information and examples see Chapter 9: Data Space Replication. 

<html><font color=#990000 size="+1"><b>Application Engine Components</b></font></html>

====Services====

The Service Application Engine allows users to host application logic as Plain Old Java Object (POJO) services.   Developers can also use the open service framework API to develop services that take advantage of the application fabric’s event processing and data storage facilities. 

Services are considered resource components, capable of supporting session-based communication, managing data and potentially processing language requests.  Service components may act as daemon processes, running in the background and producing events, or may expose their class methods as service interfaces.  Users may then invoke service logic using a variety of protocols, such as HTTP or TLP, and pass parameters to the services in a variety of formats including Java Object, XML and JSON (JavaScript Object Notation).

====Service Container Context====

Service components are ‘wrapped’ into the application fabric by the service container context.   Users configure service interfaces simply by exposing class methods as container context event handlers.  Event data content is mapped into method parameters, allowing for synchronous (DIRECT) or asynchronous (ASYNC) remote method invocation.  Application fabric tools and APIs allow developers to query and download interface objects directly from the application engine via a Web browser or the fabric's interactive scripting facility.  Additional information on the command line interface is available in Chapter 2: Language Environment.

====Application Dataspaces====

====Dataspace Context====

====Virtual Server====

A virtual server is a [[#Tablespace]] abstraction used to represent an external database from vendors such as Oracle, Microsoft or IBM.  Virtual servers are defined using a [[#connection factory]] that connects to an external database resource. The server creates and manages a pool of such connections for internal purposes.  Through the server abstraction users can define [[#Virtual Tables]] or [[#Query Tables]] that map to real underlying tables or to language requests that return tabular results.
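A hypothetical mapping sketch (the table name, server name and ''using'' keyword are illustrative only; the product's actual virtual table syntax may differ):

<code dsql>
  /** map a virtual table onto a real table in the external database */
  create virtual table [dbo.Orders]
     using MSSQLServer.DEV1;
</code>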

====Virtual Server Context====

The virtual server context, also known as a //connection context//, allows users to work with an external database directly.  For example, switching to the context:

<code>
  TNode1> use MSSQLServer.DEV1
  ..
  MSSQLServer.DEV1> sp_tables
  ..
</code>  

allows users to execute SQL queries specific to the back-end database.

====File Server====

<html>
<img src="/dokuwiki/_media/icons_large/bowlerhat-transp.png" alt="Smiley face" height="46" width="46" style="margin-left:-6px;">
<a href="/dokuwiki/start" style="margin-left:-1em; font-weight:bold; color:#990000">Back</a>
</html>