~~CLOSETOC~~
<html><font color=#990000 size="+2"><b>Quick Start with a Data Engine</b></font></html>

{{ :wiki:general_moc:eng-deploy.png?nolink&420}}

OK, so you've downloaded and installed the software.  What's next?  How do you get going with the technology, and what are the moving parts?  Where's the so-called reactive data fabric?  Fair enough.

The[[ whats_app | reactive data fabric ]] is a distributed computing environment that is built up by joining multiple [[ whats_app | data engine ]] instances (nodes) into a unified and interconnected data processing network, also referred to as a sysplex.  It is expected that users will do most of their development and testing using a local node and thereafter [[ basic_concepts#Deploying an Engine | deploy the local instance ]] into an existing sysplex.  Think of the sysplex as a cloud of applications and data.  A local data engine (node) is a small portion of the greater whole that you can configure to do whatever your application needs.  You configure, customize and test your application locally and then push the node into the cloud where it can work as a stand-alone application or interact with other nodes as needed.

The basic steps for starting an engine are:

  * Setup Local Environment
  * Configure $STROOT Variable
  * Create Deployment Descriptor
  * Configure & Test the Engine
  * Deploy the Engine

There are two ways to create and deploy an engine.  If you already have a [[ basic_concepts#Management Nodes | management node]] installed and there is an existing Domain (sysplex) defined, you can connect to the administrative [[ basic_concepts#Access Points | Access Point]], create a new node and use the Check-Out ((A check-out of a node may be done using the command-line interface or via the Workbench UI Tool)) operation.  This will create a local copy of a deployed engine instance that users can work with.  This is a so-called //Node Pull// operation.  It assumes that a user is working within an already established application fabric domain and is covered separately in [[advanced_topics#Pulling (Checking-Out) a Node from the Sysplex | Advanced Topics]].

Alternatively, a user can create a local node for prototyping and development.  This can be done via the Workbench environment or using scripts and the SLANG command-line interface.  Local nodes may be loaded into the Workbench, allowing users to test connectivity, develop components and work with data. Local nodes may then be //Pushed// into a sysplex, started as stand-alone processes or loaded into application containers such as [[wp> Apache Tomcat]], an [[wp> Application Server]], [[wp> Spring Framework | Spring]] or a plain Java application.

The application engine is designed to be light-weight, portable and embeddable. Although the application fabric supports advanced security capabilities such as pass-thru authentication, delegated authority and replicated credentials, these features may be configured post-deployment, once critical development is complete. An engine instance may be configured and deployed in a local environment and later easily re-packaged and deployed into a secure cloud environment behind a firewall.  The goal is to allow users to model the dataspace and test business logic without complex security restrictions getting in the way (unlike traditional database and application server environments). This Quick Start will guide you through the steps necessary to configure and develop a local application engine instance.  If an engine is deployed as an [[basic_concepts#Embedded Runtime | embedded runtime]], developers may start their application or container and have it //JOIN// the sysplex or a test cluster in much the same way as described here.

<WRAP round info>
You will need Java JDK Version 1.8 or higher to run all StreamScape platform components.  Java is __not__ part of the product distribution.  The application engine supports Java Standard Edition (SE), the Java Runtime (JRE) or the full Java Enterprise Edition (J2EE) from [[http://www.oracle.com/java | Oracle]].  There are no compiler or J2EE dependencies.  Please contact StreamScape for special builds that are compatible with non-standard Java versions.

On Windows platforms the installer will check for Java availability and set up the environment.  Use of Microsoft Java is not supported.  We have not yet certified with IBM's Java distribution but there have been some positive results.  Java for Android and Google's Application Engine will likely not support the full product capability due to the way the engine handles Class creation, data serialization and thread management.
</WRAP>
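Since Java is a prerequisite but not part of the distribution, it is worth confirming that a runtime is on the ''PATH'' before launching anything. A minimal shell sketch (the check is generic and not StreamScape-specific):

```shell
# Report which Java runtime (if any) is visible on the PATH.
check_java() {
  if command -v java >/dev/null 2>&1; then
    # The first line of 'java -version' output names the runtime and version
    java -version 2>&1 | head -n 1
  else
    echo "java not found on PATH"
  fi
}

check_java
```

A check like this can be dropped into the start-up scripts of any component that depends on the JVM.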
\\ 


====STROOT Environment Variable====

The ''STROOT'' system variable is used to specify the software install directory.  The relative locations of binaries and platform archives are resolved from the location that ''STROOT'' points to.  On the Windows platform the installer will automatically create and populate the system variable.  On other platforms or non-installer distributions this variable must be set manually.

On Linux the variable is set using ''export'' (bash/ksh) or ''setenv'' (csh/tcsh), depending on your shell, and may be checked the following way:

<code>
  export STROOT=/opt/streamscape     # bash/ksh; in csh/tcsh use: setenv STROOT /opt/streamscape
  ..
  env | grep STROOT
  ..
  STROOT=/opt/streamscape
</code>

The variable should be added to shell script initialization files such as <color gray>**.profile**</color> or the <color gray>**.bash_profile**</color> and should always be available when running any StreamScape components.
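Components that are started from scripts can guard against a missing variable instead of failing later in unpredictable ways. A minimal sketch (the ''/opt/streamscape'' path is only an example value):

```shell
# Fail fast when STROOT is not set, rather than letting a component
# start up with an incomplete environment.
require_stroot() {
  if [ -z "${STROOT:-}" ]; then
    echo "STROOT is not set; define it in your shell profile first" >&2
    return 1
  fi
  echo "Using STROOT=$STROOT"
}

# Example invocation in a subshell so the caller's environment is untouched
( STROOT=/opt/streamscape; require_stroot )
```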

Windows users may check the variable by going to ''System>>Control Panel>>System>>Advanced system settings'' and clicking the  ''Environment Variables'' button or by opening a Windows Shell (DOS) window and setting the variable in the following way:

<code>
  set STROOT=C:\StreamScape
  ..
  set STROOT
  ..
  STROOT=C:\StreamScape
</code>

Note that a variable set with ''set'' in a command shell applies only to that session and is not persisted.  The best way to make ''STROOT'' permanent is to use the Control Panel UI and ensure that the variable is defined in the ''System Variables'' section.  Alternatively, ''setx STROOT C:\StreamScape'' will persist the variable for future sessions, although it does not affect the current shell.

<WRAP round tip>
The //STROOT// variable is used by SLANG, Management Nodes, Task Nodes and the Application Workbench.  When it is not defined, unpredictable behavior may result. Management Nodes employ an internal Activator Process that is launched for the purposes of automated upgrades, log forwarding and launching of Task Nodes.  When the activator bootstraps child processes it uses the //STROOT// variable to determine where to obtain binaries and other artifacts.  Care should be taken to ensure that the variable points to the same version installation as the Management Node so that there are no version discrepancies or other surprises.

The variable may be changed to point to a different location in order to use different runtime versions.  It should be noted that mixing versions of runtime and/or client libraries may have unpredictable results and should only be done in compliance with certified and supported platform versions.  Please consult build-specific [[release_notes | release notes]] to verify library compatibility.
</WRAP>

\\ 
====Platform Libraries====

The Application Engine platform is distributed as a set of Java Archives (JARs) that are independent of Operating System. Components are packaged either as versions of the Runtime, the Client or a Service Pack.  Platform Archives may be found in the ''$STROOT/platform/lib'' directory and have the following functions:

^ Name ^ Description ^ Packaged Runtime ^
| stosflib.jar | Contains the service framework library and all the platform Service Pack components.            | no  |
| mnode.jar    | Contains the Management Node classes that allow an MNODE to be launched using Java tools.       | yes |
| slang.jar    | Contains the SLANG classes that allow the language environment to be launched using Java tools. | yes |
| tnode.jar    | Contains the Task Node classes that allow a TNODE to be launched using Java tools.              | yes |

The platform distribution area also contains the ''$STROOT/platform/ext'' and ''$STROOT/platform/license'' directories, which hold copies of embedded libraries and their associated licenses.  These archives are not required by the runtime as they are already embedded.  They are included for Open Source license compliance and as an eye-catcher for developers who may be experiencing Java Package conflicts.
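A quick sanity check that an installation actually contains the archives listed above can save debugging time later. A minimal sketch (the JAR names follow the table above; pass your install root as the argument):

```shell
# Report which of the core platform archives are present under an install root.
check_platform_libs() {
  libdir="$1/platform/lib"
  for jar in stosflib.jar mnode.jar slang.jar tnode.jar; do
    if [ -f "$libdir/$jar" ]; then
      echo "found   $jar"
    else
      echo "missing $jar"
    fi
  done
}
```

For example, ''check_platform_libs "$STROOT"'' prints one line per archive.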

\\ 
====SLANG Command Shell====

The application fabric provides a command line interface that allows users to interact with all application fabric components from a single administrative [[basic_concepts#Access Points | Access Point]] using an extensible, session based language environment called SLANG.  The acronym stands for //Semantic Language and Artifact Generator//.  It provides a way for users to call the application fabric’s API functions and work with artifacts such as [[basic_concepts#Deployment Descriptor|Deployment Descriptors]] and [[basic_concepts#Fabric Resource Module| Fabric Resource Modules]].  Configuration, administration, query, command and control of all application fabric resources is possible via SLANG.

The SLANG shell provides a unified interface to StreamScape's administrative commands and language environment, the so-called RPL (([[wp>Reactive Programming]] Language)).  Developers may also define their own language verbs, mapping them to Service Calls or write custom functions for accessing Application Dataspaces™.  This allows [[dsl_tutorial|domain-specific language extensions to be developed]] tailoring the language environment to industry-specific use and adapting RPL taxonomy to the needs of a business.  Verbs and functions may describe operations on Financial Instruments such as Options, Stock Portfolio elements, Hurdle Rate or IRR ((Internal Rate of Return)) models; or just as easily represent a taxonomy for working with Data Center resources such as Devices, Asset Tags and System Alerts.  As the play on words implies: RPL is not a real language, it's slang.  And slang adapts (reacts) to the needs of the speaker.

To start a SLANG session simply go to the ''/bin'' directory and execute the command:

<code>
  cd %STROOT%/bin
  slang
  ..
  slang> 
</code>

Windows environments provide an executable version of SLANG that internally wraps a Java archive.  Alternatively the distribution also provides a shell script version of the command that is compatible with popular emulation environments such as [[ https://www.cygwin.com/ | Cygwin]].  To run the shell version of SLANG:

<code>
  cd $STROOT/bin
  ./slang.sh
  ..
  slang> 
</code>

SLANG may also be started directly as a Java archive, like any other StreamScape component.  JAR versions of platform utilities may be found in the  ''$STROOT/platform/lib'' directory.  They are not OS specific and are listed [[#Platform Libraries|here]].

<code>
  java -jar slang.jar
  ..
  slang>
  ..
</code>

SLANG is a command processor environment that may be launched with parameters that disable interactive prompts, accept input script files and specify a reply file.  This allows users to automate configuration and administration tasks or generate reports by running scripts. A basic example is provided below.  Additional details on interactive scripting and advanced SLANG capabilities are provided in the [[SLANG Tutorial]].

Assuming a script file <color gray>**get_status.slang**</color> has the following commands:

<code dsql>
  echo 'Connecting..'
  connect tlp://192.168.22.11:7055
  Admin
  admin
  echo 'Connected.'
  list nodes
  disconnect
  exit
</code>

The SLANG request to call the script and get results would be:

<code> 
  slang -noprompt -java-console < get_status.slang

  Connecting..

  Connecting to 'tlp://192.168.22.11:7055'...

  Connected.

  Node    Role             Address  Host                          PID
  ------  ---------------  -------  ----------------------------  -----
  MNode1  Management Node  0.0.1.0  ip-172-16-22-10/172.16.22.10  12670
  MNode2  Management Node  0.0.2.0  ip-172-16-22-11/172.16.22.11  24263
  TNode1  Processor Node   0.0.5.0  ip-172-16-22-10/172.16.22.10  14771
  TNode2  Processor Node   0.0.6.0  ip-172-16-22-10/172.16.22.10  13140
  TNode3  Processor Node   0.0.4.0  ip-172-16-22-11/172.16.22.11  24459
  TNode4  Processor Node   0.0.3.0  ip-172-16-22-11/172.16.22.11  24579

  Connection closed.
</code>

To redirect output to a file:

<code>
  slang -noprompt -java-console < get_status.slang > slang.out
</code>
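Captured output can then be post-processed with standard tools. A minimal sketch (it assumes the node-list column layout shown above, with node names beginning ''MNode'' or ''TNode''):

```shell
# Extract the node name and role columns from a captured 'list nodes' report.
# Assumes the tabular layout shown above; adjust the pattern for your build.
list_fabric_nodes() {
  grep -E '^[[:space:]]*(MNode|TNode)' "$1" | awk '{ print $1, $2, $3 }'
}
```

Running ''list_fabric_nodes slang.out'' against the report above would print lines such as ''MNode1 Management Node''.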

\\ 
===Connecting and Getting Around===

To access the language environment of a particular engine runtime users have to connect to one of the available [[basic_concepts#Access Points]] of the application fabric. For example, to connect to a node using a URL:

<code>
  slang> connect tlp://192.168.22.11:7055
  ..
  user: Admin
  ..
  password: ****
  
  Connecting to '192.168.22.11:7055'...

  Service Event Fabric Runtime
  Release 3.4 Build 50
  Copyright (c) 2007-2014 StreamScape Technologies
  All rights reserved.

  Runtime-3.4 b362, SEF-3.4 b333, SDO-3.4 b122, OMF-3.4 b193
  OSF-3.4 b179, CLI-3.4 b75, Utils-3.4 b173, Repo-3.4 b179
  DS-3.4 b159, HTTP-3.4 b96, XMPP-3.4 b12

  Local host: ODIN (Windows 7, amd64-6.1)

  Java(TM) SE Runtime Environment
  Java HotSpot(TM) 64-Bit Server VM (build 1.7.0_17-b02, mixed mode)
  Oracle Corporation (home: C:\Java\jdk1.7.0_17\jre)

  Dev-Node-1> 

</code>

A successful log-in will return some banner information as well as release and build numbers of the node that you logged into. An access point may be a [[basic_concepts#Task Nodes|task node]] or a [[basic_concepts#Management Nodes|management node]], it does not matter.  Connecting to a single node in the fabric allows users to switch context to //any// other node, implicitly passing-thru to other nodes within the application fabric by proxy.  Such connections are considered [[basic_concepts#Routed Node Access (Proxies)|routed]] and have certain restrictions as to the resources users can see and work with.  See [[basic_concepts#Event Scope| Event Scope and Resource Access]] for additional details. 

{{ :wiki:general_moc:access-points.png?nolink|}}

==Locating Access Points==

An access point is a fancy name for a network listener. To connect to an access point the user needs to know the host name or IP address and a valid port number. SLANG supports standard TCP/IP and the SSL protocol.  For HTTP access users may connect using the Web-based SLANG Tool that is part of the [[Quilt OS]] environment. Web access requires that an HTTP [[basic_concepts#Acceptor]] be defined and started. A valid user id and password are needed to establish an HTTP connection with Basic Authentication. 

To find out which access points are available users can login and get a list of known host and port pairs using the ''LIST ACCESS POINTS'' command.  In this case at least one access point URL must be known ahead of time.  Alternatively users may set up nodes to broadcast their availability via UDP, allowing clients to dynamically discover their locations via the ''DISCOVER'' command.  In this case the SLANG tool will download all visible access points and resolve their names, allowing users to connect to nodes by name instead of a URL. See [[SLANG Tutorial]] for additional details on how to do this.
 
The default access point port is 5000.  Users may change this to suit their needs, either by configuring a node's [[basic_concepts#Acceptors|Acceptor]] using the Workbench or by editing the [[basic_concepts#Acceptors|Acceptor]] object's XML.  Acceptor objects are stored as XML artifacts with an ''XDO'' extension in the node's [[basic_concepts#Runtime Configuration Cache|configuration cache]].

The configuration cache is a transactional file system managed by the runtime used to store data, security and configuration objects.  Details on the runtime cache may be found [[runtime_configuration_cache|here]].  

In most cases users will not access or modify cache contents by hand.  However certain artifacts essential to engine start-up (such as protocol acceptors) may be easier to edit via file editor as they are used to start an engine instance.  When the runtime is active it locks all configuration files for exclusive access and does not allow direct editing.

==Common Problems with Acceptors==

Acceptors are network listeners.  They use network ports that may already be in use by other programs.  With networked systems this is a common occurrence.  It may happen because other applications or servers on the machine are using the port, or because another Application Fabric node is using the same port.  In large configurations where many nodes reside on the same machine it is easy to lose track of which ports are in use.  For medium-to-large-scale installations it is recommended that [[basic_concepts#Management Nodes]] be used to create and launch [[basic_concepts#Task Nodes]], because management nodes automatically assign ports to task node listeners and may be reconfigured to use a specific port range.  See [[Advanced Topics]] for more information on how to achieve this.

Because network services are initialized as part of node start-up, changing an acceptor usually means modifying its configuration and restarting the node from the UI or from ''SLANG''.  Modifying the artifacts by hand is not recommended; the command-line interface and scripts are the preferred way to do this. However, in some situations, such as mass initial deployments, it may be practical to distribute the same Acceptor to many nodes.  This is particularly useful with HTTP acceptors, which may contain many rules for URL masking and URI Customization.  Acceptor artifacts are located in the ''.tfcache/objects/sys/network/acceptors'' sub-directory of the runtime's working directory.  Each protocol has its own folder.  For example, the default TLP acceptor artifact is ''.tfcache/objects/sys/network/acceptors/tlp/TLPAcceptor.Default.xdo''. The engine runtime should be stopped prior to modifying such files by hand; configuration artifacts are managed by the runtime and are locked against modification by external programs while the engine is running.

<WRAP round info>
Mis-configuration of acceptors is one of the most common errors that prevents a service engine from starting properly.  When an acceptor's port is in use by another program the runtime aborts acceptor initialization and raises an internal error. ''Default'' acceptors are configured to halt runtime initialization on failure resulting in a -1407 return code.
</WRAP>
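Before starting a node you can check whether its acceptor port is already bound. A bash-specific sketch using the shell's built-in ''/dev/tcp'' pseudo-device (port 5000, the default acceptor port, is used here only as an example):

```shell
# Return success if something is already listening on the given local port.
# Uses bash's /dev/tcp pseudo-device; the connection attempt runs in a
# subshell so the file descriptor is closed automatically.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 5000; then
  echo "port 5000 is already in use - reconfigure the acceptor"
else
  echo "port 5000 is free"
fi
```

On hosts without bash, tools such as ''netstat'' or ''ss'' can provide the same information.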

To make configuration easier Workbench allows users to load and start an engine with the default TLP acceptor disabled.  This makes it possible to configure an engine's network listeners, re-start or re-deploy an engine into the application fabric without leaving the Workbench environment. 

To disable acceptor start-up:

{{:wiki:general_moc:no-tlp.png?nolink |}}

Note that you can optionally start the engine in Maintenance Mode in addition to disabling network communications.  In this mode the Service Manager does not initialize, and any services that are set to ''Autostart'' will not be started.  This may be useful in cases where you only need to start the engine for the purpose of general configuration.

Starting an engine with networking disabled means that you cannot connect to it via SLANG or external clients.  However you can still execute configuration commands, work with the RPL Script editor and otherwise continue development using the Workbench tools.  You should also be able to use local event producers, consumers and event collections.

Acceptors may be configured to ''Autostart'' and may also be set to abort engine start-up on failure.  It is recommended that the default TLP acceptor be set to ''Autostart'' and ''Abort on Failure'', whereas any other acceptor failures should not cause the engine to abort its start-up.  Acceptors may be re-started by command line or via Workbench.  

Keep in mind that in most cases an application engine will need at least the default TLP acceptor to be active so that the engine can communicate with other nodes, clients and the management framework.  However, there may be exceptional cases when starting an engine with networking disabled is desired: for example, if the engine is embedded into an application server or another Java program that chooses to handle its own network communications, or in situations where leaving an open port presents a security risk.  In such situations disabling network access may be desired or even necessary.

Since there is no hard dependency on network capabilities within the runtime, users may disable acceptor access altogether. The app engine's internal messaging system may still be accessed thru standard Java method calls (in-memory), allowing developers to easily integrate the engine's event processing and data fabric capabilities into their applications.  Alternatively, engine resources may be accessed in response to Servlet calls (i.e. as an embedded Apache component), via the Application Fabric's client API or the JDBC interface.  Disabling networking is an implementation-specific decision and is left up to developers and architects.

\\
===On-Line Help===

The SLANG environment provides on-line documentation and samples for all commands across all contexts.  Users may use ''?'' or ''help'' to check syntax and usage.  Auto-completion is invoked by hitting the ''Tab'' key.  Wherever possible the shell provides a list of possible commands or entries that may be used to complete the request.

Help can be used to search for commands by specifying a partial set of key verbs.  For example, to obtain a list of commands that contain the word ''event'', use:

<sxh sql; gutter: false>
TNode-1> ? event

Context Commands
-----------------------------------
create event cache
create event prototype
describe component events
describe event cache
describe event prototype
describe event prototype properties
drop event cache
drop event prototype
list active events
list event caches
list event flows
list event prototype models
list event prototypes
list events
</sxh>

The help environment also supports tags that identify certain command groups.  This type of search can be more precise as tags are used to group certain types of commands together by function.  For example:

<code>
TNode-1> # event

Commands with 'event' tag
-----------------------------------
create event prototype
describe event prototype
describe event prototype properties
drop event prototype
list event prototype models
list event prototypes
list events
</code>

The SLANG shell supports auto-completion allowing users to preview available verbs or resource identifiers that may be used to complete a given command.  Use the ''Tab'' key to get a list of available verbs and/or identifiers.  For example hitting ''Tab'' after the following command:

<code>
  TestNode> describe consumer TestNode://TSPACE.TEST:
</code>

will implicitly call a ''LIST'' function and return:

<code>
  TestNode://TSPACE.TEST:MyHandler2_LISTENER_1
  TestNode://TSPACE.TEST:MyHandler2_LISTENER_2
  TestNode://TSPACE.TEST:MyHandler_LISTENER_1
  TestNode://TSPACE.TEST:TestHandler3_LISTENER_1
  TestNode://TSPACE.TEST:TestHandler_LISTENER_1
</code>

\\ 
===Using Application Fabric Resources===

The language processor is context sensitive, meaning that the command set changes depending on which fabric components are being used.  For example if the SLANG session’s context is set to the runtime of a given node the language processor will recognize relevant runtime commands for working with the fabric runtime.  When session context switches to a dataspace the processor will recognize commands and DSQL queries for working with data collections based on the type of space you entered.  If a service context is in use the language processor will understand service-related commands as well as any [[dsl_tutorial| DSL extensions]] defined for the service.  

To move between components the SLANG shell provides the ''USE'' command:

<code>
  TNode1> use tspace.LabTech
  ..
  TSPACE.LabTech> list collections
  ..
  Type           Name
  -------------  ---------
  VIRTUAL TABLE  clients
  VIRTUAL TABLE  eventlogs
  ..
  TSPACE.LabTech> use ..
  
  TNode1>
</code>

There are three main contexts: the Runtime, the Dataspace and the Service.  Each one has its own set of commands and supports variations of RPL script.  For example the syntax to create [[basic_concepts#Event Triggers]] and work with [[slang_tutorial#Session Variables]] is supported across all contexts.  Users can declare variables and triggers on a variety of events that come from the runtime, services or dataspaces.

For example:

<code>
  TNode1> list dataspaces
  
  Node     Type    Name
  -------  ------  --------
  TNode1   TSPACE  LabTech
  ..
  TNode1> use tspace.LabTech
  ..
  TSPACE.LabTech> string sysid
  OK
  TSPACE.LabTech> int totalCalls
  OK
  ..
</code>  
  
Note the change in prompt to help identify the context that the SLANG session is currently in.  SLANG allows users to define session-local variables as well as session-local table collections.  Session objects only exist for the duration of the session and will be discarded once the user disconnects and the session terminates.  

<code>
  TSPACE.LabTech> set sysid = '101BT'
  TSPACE.LabTech> set totalCalls = 0
  TSPACE.LabTech> set totalCalls = totalCalls + 1
  OK
  TSPACE.LabTech> select @sysid, @totalCalls

  sysid  totalCalls
  -----  ----------
  101BT  1
  ..
  TNode1> use MyService.Main
  ..
  MyService.Main> create event trigger..
   
</code>

Additionally the application fabric supports the [[basic_concepts#Virtual Server]] ''sub-context'' for working with external database connections.  This sub-context allows users to execute query statements in passthru mode using a previously defined [[basic_concepts#Connection Factory]].  For example:

<code sybase>
  TNode1> use SybaseServer.QA1
  OK
  ..
  SybaseServer.QA1> select @@version
  
  version
  -------------------------------------------------------------------------------------------------------------------
  Adaptive Server Enterprise/12.5.0.1/SWR 9981 IR/P/Sun_svr4/OS 5.8/rel12501/1776/32-bit/FBO/Mon Feb 25 23:35:46 2002
  ..
</code>

\\ 
====Deployment Descriptor (DDX)====

Every node, regardless of its role must have a unique Deployment Descriptor.  The descriptor is a configuration object that describes the basic functional configuration of an engine instance.  It provides critical information required for the runtime environment such as ''Node Name'', ''Domain'', ''Security Credentials'' and ''Application Binding'' information.   

Within a sysplex (domain) a node's descriptor must have a unique ''Node Name''.  An application fabric domain does not allow for duplicates, and nodes attempting to ''JOIN'' an active domain with a duplicate name will be rejected and shut down. Deployment descriptors are stored as serialized and encrypted Java objects, packaged up inside an ''stdeploy.jar'' file. When starting a node users can specify the location of the JAR file by using the ''-DDX'' parameter or simply include the JAR in the ''CLASSPATH'' of the application.  The fabric runtime will automatically scan the CLASSPATH on startup to determine if a descriptor object is available.

===Creating a New Descriptor===

Descriptors may be created using the SLANG command shell or thru the authoring tool that is provided as part of the [[basic_concepts#Application Workbench]].  Application engine nodes that are created through the [[basic_concepts#Management Node]] interface are automatically part of an existing domain.  Their associated descriptors are generated and managed by the sysplex.  This is covered separately in [[ Advanced Topics ]].

A ''deployment descriptor'' is a configuration and authorization tool in one. Users may secure the deployment descriptor with their own password, which prevents descriptors from being edited by unauthorized individuals. Descriptors hold start-up configuration and security settings and may specify the name of an application fabric domain, node name, security credentials, and the discovery and authentication modules used to start an engine.  Descriptors may be set up to expire, allowing users to control how long a given node can remain active and whether the node has dependencies on other components.  This offers developers a flexible licensing, audit and usage control mechanism that may be used with an embedded runtime or SaaS environments.

<WRAP round info >
Deployment descriptors are generated as JAR files, allowing them to be automatically loaded by the Java environment.  If the archive is placed in the working directory where a node starts up, the JAR will automatically be picked up and loaded; specifying the DDX parameter on start-up is not necessary.  This design makes it easy to start a runtime in stand-alone mode, as part of an active sysplex where the node is controlled by a Management Node, or as an embedded engine when loaded into a Java program or application server.

Deployment descriptors may be created by the user, or they may be generated by a self-help environment such as a cloud provisioning app.  Alternatively you can generate deployment descriptors and give them out to developers (for example by e-mail), allowing for operational control over development environments.  Since descriptors may be configured to time-out after a certain period of time, an organization can use the descriptor as a means of controlling licensing and technology usage.
</WRAP>

===Create Descriptor with SLANG===

In some cases users may wish to manually create a deployment descriptor (ddx).  This may be done from SLANG and requires two separate steps:

  * Generate a runtime context object called ''rtcontext.cdx''
  * Make a deployment descriptor using the context object called ''stdeploy.jar''

To generate the context object ((This action does not require users to connect to a running node)):

<code>
  generate cdx node TestNode @ 'c:/StreamScape/TestNode/deploy'
  ..
</code>

To pass security and basic information to the context:

<code>
  generate cdx node TestNode at 'E:/TMP' set parameters (user = 'Bob', password = '123', vendor = 'StreamScape Tech')
  ..
</code>

The generated object is presented as an editable XML document named ''rtcontext.cdx'' that will be stored at the specified location.  Once the user has edited the object and set its properties, a deployment descriptor can be made:

<code>
  make ddx @ 'c:/StreamScape/TestNode/deploy'
  ..
</code>

This will build an ''stdeploy.jar'' in the specified directory.  Thereafter this file can be edited using the Application Workbench or described using the SLANG environment:

<code>
  describe ddx @ 'C:\StreamScape\deploy\CF4All001'
  
  Property               Value
  ---------------------  ---------------------------------------
  Install Root           C:\StreamScape
  Node                   CF4All001
  User                   Admin
  Domain                 DemoSysplex85.com
  Cluster Enabled        false
  Cluster Name
  Authentication Module  Default
  Discovery Module       Default
  Coherence              true
  Presence               true
  Repository             true
  Force Lock             true
  Unload on Last         false
  Reference Class        com.streamscape.sef.container.Container
  Vendor                 StreamScape Technologies 
  DDX Security           false
  Expiration Time        N/A
</code>  

===Create Descriptor with Application Workbench===

{{:wiki:workbench:ddx_menu.png?nolink&150 |}}

Using the Workbench environment users can create a Deployment Descriptor via an interactive Wizard.  Using the ''Tools'' menu item select the ''Deployment Descriptor'' option and choose the ''Fabric Node'' radio button.  Alternatively users can ''Open'' existing deployment descriptors and save them to a different location with potentially different node names and credentials.  This makes it easy to create new node configurations based on the old ones.

Selecting the ''Fabric Node'' option creates a deployment descriptor for a stand-alone node.  This is the default.  Other options are simply templates that allow you to create a descriptor for embedded engines that can run inside an application server or a 3rd party container (e.g. Spring).  Selecting ''Next'' will open the configuration panels, whereas ''Open'' will allow you to load and edit an existing Deployment Descriptor.  If a descriptor has security enabled, users will not be able to open it unless they specify the correct password.

For new descriptors most of the template is already filled in; users need only provide several key pieces of information, such as the ''Domain Name'', the name of the ''Node'', and the ''User'' and ''Password'' credentials.  The specified user becomes the ''Master Administrator'' of the node and owns the ''Admins Group''.  By default this user is typically **Admin**.  Security information is written into the security object of the engine's runtime cache when the engine initializes for the first time.  If user credentials are changed in the deployment descriptor, the new user becomes the ''Master Administrator''.  The security object will have to be re-initialized, resulting in re-assignment of all permissions to the ''Admins Group'', and entitlement information will be lost.  As such, changing user credentials after initialization is not recommended.

For stand-alone development nodes that are not part of an active sysplex, the default settings should be fine.  By default, all objects are owned by the master administrator.  You will be able to change names and security settings simply by changing the user information.  When a node joins a sysplex, its security object is wiped clean.  ''Users'', ''Groups'' and associated information are replicated from the domain.  As such, care should be taken when planning secure implementations that intend to use ''resource entitlements'' to secure services, dataspaces or connections.  See [[Advanced Topics]] for further information and best practices on how to do this.

{{:wiki:workbench:ddx_wiz2.png?nolink&450 |}}

Each descriptor may be set up to include a ''Reference Class'' name, a dependency object without which the runtime will not start.  When the runtime starts it checks the JVM for the availability of this class and aborts initialization if the class is missing.  This facility allows developers to bind their engine instances to specific applications, and potentially to the license managers of such applications, providing a way to package and distribute bundled applications in a secure manner.
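Conceptually, such an availability check amounts to probing the class loader.  The sketch below is illustrative only and is not the engine's actual implementation; the class names used are placeholders:

```java
// Hypothetical sketch of a reference-class availability check,
// similar in spirit to what the runtime performs at startup.
public class ReferenceClassCheck {

    // Returns true if the named class can be loaded by the JVM.
    static boolean isClassAvailable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isClassAvailable("java.lang.String"));    // true
        System.out.println(isClassAvailable("com.example.Missing")); // false
    }
}
```

Where the sketch simply returns ''false'', the real runtime would abort initialization instead.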

The ''Vendor'' string is included for documentation purposes only; it identifies the descriptor's author and is not otherwise used.

''VCard'' enables users to store extended information in accordance with the [[http://tools.ietf.org/html/rfc2426 | VCard Specification (RFC 2426)]] format.  This data loosely conforms to the LDAP and Active Directory format.  This is an optional setting, but it is recommended.  If [[basic_concepts#Delegate Authentication]] is supported, users that give consent for the application fabric to access their delegate profile ((e.g. Google, Facebook or LinkedIn profiles)) will have profile information automatically copied into the local VCard.

''Authentication Module'' specifies how the user's credentials are verified by the engine.  The ''Default'' setting uses the application fabric's ''Security Database'' to perform this task.  If an alternate module name is specified, this module definition must exist in the configuration cache.  Custom authentication modules may be written to authenticate against sources such as Active Directory, LDAP, a database or user-defined mechanisms.  See [[Advanced Topics]] for further information and best practices on how to do this.

''Discovery Module'' specifies how the engine will load its [[basic_concepts#Exchange Directory Table]].  This table contains all the known [[basic_concepts#Links]] of the node used for peer-to-peer communications.  The ''Default'' setting uses a file-based mechanism that also supports UDP-based discovery. This makes it possible to load the directory table from a file or URL, wherein the URL may point to a shared, networked directory table.  This approach is good in most situations. Also note that if the ''DirectoryTable.xdo'' artifact exists in the working directory where the engine is launched, it will be loaded and evaluated when discovery is set to ''Default''.  Management nodes will automatically maintain and distribute this file in an active sysplex.  However, users can also develop their own modules, describing them in the runtime configuration cache.  See [[Advanced Topics]] for further information and best practices on how to do this.

Keep in mind that the DDX ((Deployment Descriptor)) is simply a configuration object used to give the runtime environment an identity and provide security and peer discovery information.  There is no association between a node's repository and its deployment descriptor, besides security credentials.  This makes it possible to freely clone, migrate or rename nodes at development time.

However, once DDX security is enabled and runtime keys are generated ((Step 5 of the Wizard)), the node's data store, repository and the associated deployment descriptor become linked.  From that point on a given node will only start with that particular deployment descriptor.  Alternatively, if a node has been provisioned using Management Node commands or via the Workbench Topology View UI, the sysplex will build, assign and manage the DDX.  Deployed nodes will not be able to set any specific descriptor parameters or use their own DDX.  See [[Advanced Topics]] for details on how this is handled.

===Create Descriptor from Management Node===

When nodes are created and/or deployed into an existing sysplex the [[basic_concepts#Management Node]] takes care of managing the deployment descriptor.  If a node has been created using the management framework, its generated descriptor will be copied to the local desktop upon node check-out.  Users should not change descriptor information as it will be overridden when the node is checked-in.

When a new node is deployed into an existing sysplex, information in the local descriptor, such as ''Name'' and ''Domain'' is used to generate a sysplex descriptor.  Security information is checked to ensure that the node is authorized for deployment and the supplied credentials are valid in the sysplex.

No further work with the descriptor is required as the management framework takes care of most tasks.  This is covered in more detail in the following track on [[Deploying a Node]], and additional details are available in [[Advanced Topics]].

\\

====Creating a New Data Engine (Node)==== 

To create a data engine instance, its runtime configuration cache must be initialized using a valid deployment descriptor.  Initialization may be performed programmatically, via the Workbench environment, or by using the platform-specific ''TNODE'' command, script or its ''java -jar'' equivalent (for example ''java -jar tnode.jar'').  It is good practice to initialize the runtime cache first and then make any necessary edits to network settings or memory configuration.  The cache will also be initialized automatically when the engine first starts.

\\
===Create Engine===

To initialize a new node on the Windows platform, use the following command.  Its equivalent Shell version may also be run under Cygwin, Linux or MacOS.  Additional startup and initialization options for Task Node startup can be obtained by issuing ''tnode -h''.

<code>
  tnode -init -dir C:\StreamScape\nodes\demo -ddx C:\StreamScape\deploy\demo
</code>

On Windows platforms a Task Node or Management Node may also be started as a service.  See the relevant ''stservice.exe'', ''install_mnode.bat'' and ''install_tnode.bat'' scripts.  On Linux platforms the Shell scripts that start StreamScape nodes may be added to standard ''rc.*'' initialization scripts. 

\\
===Create Engine with Workbench===

Users can create a new node by choosing the following menu option:

{{:wiki:workbench:create-node.png?nolink&250 |}}  {{:create-node-2.png?nolink&350 |}}

In the node creation wizard, select the location of the deployment descriptor created in the prior step.  Note that the working directory where the node cache is created and where the node runs may potentially be different from the location of the deployment descriptor.  This setup is not recommended since it reduces portability.
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\

===Create Engine with Management Node===

A new engine may be created by issuing the following command to a Management Node:

<code>
  create node TNode-3 -log-broadcast
</code>

This initializes a new node, automatically generates a new deployment descriptor and adds the node to the list of managed nodes.  If a port range is specified, the new node's TLP port will be assigned by the management node and the acceptor will be automatically updated to use the newly assigned port.

Thereafter, the user can stop the running node and perform a checkout operation to create a [[basic_concepts#Fabric Resource Module]] that can be used to initialize a local copy of the same node:

<code>
  stop node
  ....
  checkout node TNode-3 manifest TN3Manifest at 'c:/Streamscape/resources/TNode-3.frm' without data
  ..
</code>

The ''WITHOUT DATA'' hint ensures that dataspace contents are not exported, as there can be a lot of data if collections are ''LOGGED'' or ''PERSISTENT''.

Alternatively the Workbench Topology UI can be used to perform these actions in a guided fashion using Wizards:

<image here>

\\
====Starting the Data Engine====

The engine may be started as a task node using the platform-specific ''TNODE'' executable.  This is provided for convenience and may be useful to distinguish a task node from other Java programs running in the operating system.  Nodes may also be started using the standard ''java -jar'' command.  To start a node using the command line tool, users may do the following:

<code>
  tnode -start -log -dir C:\StreamScape\nodes\demo -ddx C:\StreamScape\deploy\demo
</code>

Alternatively, users may embed the node into their applications and start the same node programmatically:

<code java>
  // Initialize the runtime..
  RuntimeContext ctx = RuntimeContext.getInstance();

  System.out.println("Started Runtime..");
</code>

In this case the parameters that specify the log, start-up directory and deployment descriptor may be passed in as user-defined Java parameters using the ''-D'' flag.  The following properties are supported:

^Property                                   ^ Description                     ^
|streamscape.install.root	   	    |StreamScape Install Directory Override |
|streamscape.runtime.startup.dir	    |Node Startup Directory Override        |
|streamscape.runtime.context.name	    |Node Name (Runtime Context) Override   |
|streamscape.runtime.deployment.dir	    |Deployment Descriptor Override         |
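The override properties above are ordinary JVM system properties, so embedded code can read them with ''System.getProperty''.  The sketch below is illustrative; the property names come from the table, while the fallback values are placeholders:

```java
// Reading the node override properties listed above. Property names are
// from the table; the fallback values here are illustrative only.
public class NodeProperties {

    static String get(String key, String fallback) {
        return System.getProperty(key, fallback);
    }

    public static void main(String[] args) {
        // Normally set on the command line, e.g.
        //   java -Dstreamscape.runtime.context.name=DemoNode -jar tnode.jar
        System.setProperty("streamscape.runtime.context.name", "DemoNode");

        System.out.println(get("streamscape.runtime.context.name", "DefaultNode"));
        System.out.println(get("streamscape.install.root", "C:/StreamScape"));
    }
}
```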

Starting a node from custom code has certain advantages, allowing users to perform standard Java debugging of the engine, service code or any Java classes that may be used.  Alternatively the engine can be embedded into other applications or loaded as a JDBC driver into any supporting application.  This option is covered more extensively [[runtime_as_jdbc_driver|here]].

<WRAP round tip>
Developers may start the runtime in their favorite development environment and perform local, integrated testing and debugging of the engine. The sample code above may be compiled into a simple Java stub with a ''main()'' and then called from within a development environment such as [[http://www.ibm.com/developerworks/eclipse/downloads/|Eclipse IDE]], [[https://netbeans.org/|NetBeans]], [[http://www.jetbrains.com/idea/|IntelliJ Idea]] or any development environment of your choice.

You can use the ''-D'' properties above to specify the location of the deployment descriptor, the node name and the location where StreamScape is installed.  Once launched in this fashion the runtime is controlled by the IDE.  Developers can set traces and redirect the Logger stream to a file, event stream or the IDE console programmatically using the Java API.

If <color gray>**tnode.conf**</color>, <color gray>**tnode.traces**</color> or <color gray>**DirectoryTable.xdo**</color> files exist in the start-up directory, they will be automatically picked up by the runtime and used for configuration or to establish links with other nodes.  This can be very useful in situations where shared data objects, packages or other resources need to be replicated to the test node in the IDE from an active development sysplex.
</WRAP>
\\
===Starting a Data Engine in Workbench===

When a data engine is started using the Application Workbench it spawns a separate JVM that runs independently of the UI tool.  This allows for rapid prototyping and in-depth testing of multiple nodes local to the machine.  Users may view data engine error logs in real-time, deploy services, develop RPL language constructs like Triggers, Functions and Actors in the local environment, as well as run queries directly from the Application Workbench, allowing for easy unit testing of service logic, process flows and transient data models.  An engine is a standard Java JVM.  Developer tools such as [[http://www.ibm.com/developerworks/eclipse/downloads/|Eclipse IDE]], [[https://netbeans.org/|NetBeans]] or [[http://www.jetbrains.com/idea/|IntelliJ IDEA]] may attach profilers and debuggers to the engine to assist in testing and debugging of service logic or custom Java Function Libraries.

\\
====Stopping the Engine====
 
Stopping the engine may be performed in a variety of ways.  When started interactively using the platform-specific ''TNODE'' executable, the Ctrl+C interrupt sends a stop signal to the executable and triggers its shutdown handler, which should result in a controlled shutdown.  Although this direct method may be useful for testing, the preferred and correct way of stopping a runtime environment is through the use of SLANG commands or programmatically, by connecting to the service engine and issuing a shutdown request, for example:

<code>
  slang -url tlp://localhost:5000 -u Admin -p admin
  ..
  TNode1>
  ..
  TNode1>shutdown
  ..
</code>

The shutdown sequence will suspend all running services, shutting them down in the sequence specified by the Service Manager, close any Dataspace sessions, roll back any transactions that may be in progress, then stop and close the Dataspace store and finally instruct the runtime to downgrade to Run Level 0.  Depending on the number of pending transactions, and the type and complexity of outstanding services, the shutdown sequence may take some time.

During shutdown the Dataspace store performs basic recovery and transaction log maintenance, flushing in-memory data to disk and releasing resources.  By default, however, it does not merge outstanding recovery data into the data file.  This results in faster shutdown at the expense of slower startup.  In certain cases users may want to reverse this behavior by doing the following:

<code>
  ..
  TNode1>shutdown checkpoint
  ..
</code>

When a data engine is under the control of a Management Node, it is stopped using the administrative command ''stop processor node <Node Name>''.  Internally, the Management Nodes and tools use the SLANG interface to perform shutdown operations.
 
\\
====Engine Runtime Internals====

The fabric runtime JAR ''struntime.jar'' is located in the ''<install_root>/platform/lib'' directory of the StreamScape installation, where ''<install_root>'' is the product installation directory, typically specified by the $STROOT environment variable at installation time.  This archive contains the full product distribution and must be included in all application fabric programs that intend to run as fabric nodes.  The runtime Java Archive also contains the classes for the embedded JDBC driver and must be included in the class path if the runtime is boot-strapped as an embedded database.

The fabric runtime may be run in three basic modes.  It may be started within the Task Node container, loaded as a standard JDBC driver by any application that supports the interface, or embedded in an application, loaded and used by the parent program.  Depending on the additional components needed by the runtime infrastructure, additional Java Archive files may need to be included in the class path or the runtime configuration cache.

===Fabric Runtime Modes===

The fabric runtime supports the following modes:

  * As an in-process engine embedded in a Java Application
  * As a Fabric Client that can connect to a running Application Engine
  * As a stand-alone Task Node (TNODE) process, launched via platform-specific executable
  * As a managed process, launched by a Management Node (MNODE) 
  * As an embedded JDBC Driver used by a Java Application

===Service Engine Run Levels===

An application engine instance is a full-featured micro-kernel whose architecture is loosely based on the classic UNIX model.  When a service engine initializes it goes through a series of run levels that define its state before becoming fully functional.  The run levels may be seen by enabling the runtime traces and directing them to a log file.

It should be noted that the runtime engine is a light-weight, embeddable application platform.  Unlike an operating system, environment start-up typically takes several seconds to complete.  Depending on the recovery steps, service logic and the amount of data held in data space memory, this time may increase.  Run level details are provided here for informational and debugging purposes.  They have the following meaning:

==RUN LEVEL 0==

The runtime is in single-threaded, stand-alone mode.  At this stage the deployment descriptor is verified, and the environment variables as well as system variables (java ''-D'' flag) are evaluated.  System serializers are loaded and the Object Mediation environment for data marshaling is established.  Errors occurring at this level are typically the result of a missing or invalid deployment descriptor, or of problems with the class loader sequence that prohibit the data marshaling and aspects library from initializing properly.

This run level throws -1000 EXCEPTIONS which are fatal and will cause the runtime initialization to fail, forcing a system exit.  Care should be taken with embedded applications as they would need to intercept the exit directive or risk a full application stop.  It is expected that applications embedding the engine will not continue to function if initialization of the environment fails.

^Exception Code ^ Error Condition ^
|--1000 | Deployment Descriptor Artifact Not Found, due to missing artifact or CLASSPATH. |
|--1001 | Archive does not contain a valid Deployment Descriptor Artifact. |
|--1002 | Archive contains an empty Deployment Descriptor Artifact. |
|--1003 | I/O Exception processing Deployment Descriptor Artifact. |
|--1004 | Deployment Descriptor Artifact decryption failed. |
|--1005 | Deployment Descriptor Artifact de-serialization failed. |
|--1006 | Unsupported Context Type in Deployment Descriptor Artifact. |
|--1007 | Runtime Context name is empty or null in Deployment Descriptor Artifact. |
|--1008 | Runtime Validation Exception.  Validation Class not found in Deployment Descriptor. |
|--1009 | Runtime Environment Exception.  STROOT variable not set. |
|--1010 | Invalid Context Name specified in Deployment Descriptor. |
|--1011 | Domain undefined in Deployment Descriptor. |
|--1012 | Runtime Context parameter conflict in Deployment Descriptor. |

==RUN LEVEL 1==

In this run level the runtime has evaluated the deployment descriptor and will attempt to attach to the application persistence cache located in the ''<node_startup_dir>/.tfcache'' directory.  The runtime will verify cache content by checking and validating all the relevant artifact files.

Cache entities will be processed by the engine to ensure that a prior shutdown operation did not leave the cache in an inconsistent state.  The entity repository modifies data in a pseudo-transactional fashion using a 2-file I/O approach where appropriate.  Old files are versioned off with a .v extension and new entries are written down as a single I/O operation.  Upon successful write the version file is removed.  In the event of a write failure the new file will not be a usable object.  If the runtime is re-started after failure it will automatically reconcile by rolling back to the safe version of the object contained in the .v file.  In such a case pending changes will be lost.  
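The two-file write pattern described above can be sketched in plain Java NIO.  This is an illustrative reconstruction, not the repository's actual code; the file names and layout are assumptions:

```java
// Sketch of the two-file write pattern: version the old file off with a
// .v extension, write the new entry in one operation, then remove the
// version file on success. recover() rolls back after a failed write.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeWrite {

    static void write(Path target, byte[] data) throws IOException {
        Path version = target.resolveSibling(target.getFileName() + ".v");
        if (Files.exists(target)) {
            // Keep the old object as the safe version
            Files.move(target, version, StandardCopyOption.REPLACE_EXISTING);
        }
        Files.write(target, data);     // single write of the new entry
        Files.deleteIfExists(version); // success: drop the safe copy
    }

    static void recover(Path target) throws IOException {
        Path version = target.resolveSibling(target.getFileName() + ".v");
        if (Files.exists(version)) {
            // Prior write failed: roll back to the safe version
            Files.move(version, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempDirectory("tfcache").resolve("entity.xml");
        write(p, "v1".getBytes());
        write(p, "v2".getBytes());
        System.out.println(new String(Files.readAllBytes(p))); // prints v2
    }
}
```

Note that, as the text states, any changes pending in the failed write are lost when ''recover()'' rolls back to the ''.v'' copy.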

This run level throws -1100 EXCEPTIONS which are mostly fatal and may cause the runtime initialization to fail, forcing a system exit.  Care should be taken with embedded applications as they would need to intercept the exit directive or risk a full application stop.  It is expected that applications embedding the engine will not continue to function if initialization of the environment fails.

^Exception Code ^ Error Condition ^
|--1100 | Error Auto-binding Repository. |
|--1101 | Illegal State when Auto-binding Repository. |
|--1102 | Interrupted Exception when Auto-binding Repository. |
|--1103 | General Error Auto-binding Repository. |

==RUN LEVEL 2==

This run level initializes the critical factory objects for the manager bean.  The scheduler and factory stores are created and the runtime singleton initialization completes.  The engine becomes multi-threaded at this stage.

This run level throws ''-1200 EXCEPTIONS'' which may be fatal.  However, due to the nature of initializations at this phase it is unlikely that a failure will occur unless the system is running low on memory.

==RUN LEVEL 3==

This run level initializes the user-defined semantic types, security manager, event datagram factories, and discovery module. This allows the runtime to load all necessary configuration objects to advance to multi-user mode. The runtime also loads the acceptor factory and creates a Fabric Exchange for the node.  All subordinate components that are dispatcher-based, the session manager, trigger manager and all public logging facilities are then started.  

The runtime is now in multi-threaded, multi-user mode, capable of accepting network connections and peer links.  It should be noted that at this point the service manager and related components have not yet started.  As such, it is possible that applications waiting to connect to the node may be able to connect, but not see the services and data spaces until those complete their initialization.

The runtime now creates an entity repository accessor and makes the repository available for general usage.  All critical repository artifacts are checked and loaded at this stage.  When the entity repository module works with its cache entities it creates an in-memory registry that holds all the relevant configuration objects.  The objects are persisted to disk by being serialized into their XML form, allowing for emergency editing of the artifacts if necessary.

It should be noted that all configuration files are locked by the runtime at start-up, making direct manual editing impossible.  Artifacts may be added to the configuration cache by simply being placed into the correct directory.  The cache is a live entity: new objects will be automatically validated and rejected if they are not valid serialized entities.

As part of entity checking the runtime attempts to marshal the configuration artifacts and load them into memory.  If this fails the artifact is removed from the cache and placed in the <node_startup_dir>/.junk directory.  This check is performed with all semantic type, service configuration, event prototype and factory objects. 
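The marshal-and-quarantine check can be sketched as follows.  This is illustrative only; a simple XML parse stands in for the engine's actual entity marshaling, and the directory names follow the text:

```java
// Sketch of the artifact check described above: try to read each cached
// XML artifact, and quarantine anything unreadable into a .junk directory.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import javax.xml.parsers.DocumentBuilderFactory;

public class CacheCheck {

    // Returns true if the artifact loads; otherwise moves it to junkDir.
    static boolean validate(Path artifact, Path junkDir) throws IOException {
        try {
            DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(artifact.toFile()); // stands in for entity marshaling
            return true;
        } catch (Exception e) {
            Files.createDirectories(junkDir);
            Files.move(artifact, junkDir.resolve(artifact.getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir  = Files.createTempDirectory("tfcache");
        Path good = Files.write(dir.resolve("type.xml"), "<entity/>".getBytes());
        Path bad  = Files.write(dir.resolve("bad.xml"), "not xml".getBytes());
        System.out.println(validate(good, dir.resolve(".junk"))); // true
        System.out.println(validate(bad,  dir.resolve(".junk"))); // false
    }
}
```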

<WRAP round info >
When the node is part of a sysplex, validations performed at Run Level 3 may be overwritten by the coherence engine that initializes in subsequent run levels.  Sysplex nodes are less likely to fail at this phase since their content is typically synchronized with the root node. 
</WRAP>

This run level throws -1300 EXCEPTIONS which are non-fatal.  It is expected that a runtime that has made it into this phase has no system problems, but may potentially have issues with its network configuration or with user-defined artifacts.  Semantic type and prototype resolution errors are considered soft faults and will not prevent the runtime from starting.  In rare circumstances, if system factory or prototype errors are encountered (often as a result of an upgrade that did not complete properly), the engine may halt startup and force an exit.

One of the most common exceptions is the -1306 EXCEPTION, caused by a network port that is already in use.  The fabric provides optional settings that allow this error to be treated as fatal or non-fatal.  The following, potentially fatal errors may result at this stage:

^Exception Code ^ Error Condition ^
|--1300 | System Semantic Type Errors. |
|--1301 | Security Manager Errors. |
|--1302 | Event Datagram Factory Errors. |
|--1303 | Management Node Factory Errors. |
|--1304 | Discovery Module Errors. |
|--1305 | Network Acceptor Manager Errors. |
|--1306 | Fabric Exchange Initialization Errors. |
|--1307 | Repository Accessor Errors. |
|--1308 | Fabric Exchange Startup Errors. |	
|--1309 | Runtime Fabric Event Dispatcher Errors. |
|--1310 | Runtime Session Manager Errors. |
|--1311 | Runtime Advisory Listener Errors. |
|--1312 | Node TLP Acceptor Startup Errors. |
|--1313 | Event Trigger Manager Errors. |	

==RUN LEVEL 4==

This run level initializes the ''Statistics Monitor'' and ''Language Processor'' environment for the runtime, creates the ''Event Identity Plugin Manager'', ''Runtime Package Registry'', Service Manager, Data Space Manager and Coherence Agent.

At this stage the shutdown hooks for the runtime are registered and the node is ''JOINED'' to the application fabric if instructed to do so via the ''Discovery Module''.  This run level throws ''-1400 EXCEPTIONS'' which are non-fatal and should not prohibit the start-up and operation of the runtime.  Certain components may, however, fail, typically as a result of improper configuration.

^Exception Code ^ Error Condition ^
|--1401  | Runtime DSL Processor Errors. |
|--1402  | Runtime EIM Plugin Manager Errors. |
|--1403  | Runtime Package Registry Errors. |
|--1404  | Runtime Package Registry Errors. |
|--1405  | Dataspace Manager Errors. |
|--1406  | Coherence Agent Errors. |
|--1407  | Runtime Acceptor Start-up Errors. |
|--1408  | Fabric Join Errors. |
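The shutdown hooks registered at this run level are standard JVM facilities.  As a hedged illustration (not the engine's actual code), a controlled-shutdown hook can be registered like this:

```java
// Illustrative shutdown hook registration; the real runtime registers
// hooks that perform a controlled shutdown on JVM exit.
public class ShutdownHookDemo {

    static Thread register() {
        Thread hook = new Thread(() ->
            System.out.println("Performing controlled shutdown.."));
        // Runs on normal exit or an interrupt such as Ctrl+C
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) {
        register();
        System.out.println("Engine running..");
        // The hook fires when the JVM exits
    }
}
```

This is the same mechanism that lets the ''TNODE'' executable turn a Ctrl+C interrupt into a controlled shutdown, as described in the //Stopping the Engine// section.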

\\

<html>
<img src="/dokuwiki/_media/icons_large/bowlerhat-transp.png" alt="Smiley face" height="46" width="46" style="margin-left:-6px;">
<a href="/dokuwiki/start" style="margin-left:-1em; font-weight:bold; color:#990000">Back</a>
</html>