User’s Guide

Introduction

The stpy library consists of two modules, fabric and ds:
  • The fabric module provides a connection to a StreamScape node through HTTP, giving access to the Dataspace, Services and Exchange.
  • The ds module is a thin Python wrapper around the fabric API. It is compatible with the Python DB API interface (version 2).

Compatibilities

The current stpy implementation is compatible with:
  • Python 2 (2.7 or later) or Python 3 (3.5 or later)
  • StreamScape runtime version 3.6 or later

Installation

To install stpy from the source distribution, use the command:

pip install stpy-1.0.0.tar.gz

Verify installation:

pip list

The output should contain a line like:

stpy (1.0.0)

If pip is not installed, unpack the archive, change to the unpacked directory and run the following command:

python setup.py install

Once the package is installed, it is available for import in the Python interpreter.

Logging and Traces

The standard Python logging module is used for logging and traces.

Sample configuration:

import logging
# create a logger named 'stpy'
loggerconf = logging.getLogger('stpy')
loggerconf.setLevel(logging.DEBUG)
# create file handler which logs debug messages
fh = logging.FileHandler('stpy.log')
fh.setLevel(logging.DEBUG)
# create console handler with error log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
loggerconf.addHandler(fh)
loggerconf.addHandler(ch)
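If the traces become too verbose at runtime, only the level on the 'stpy' logger needs to change. For example, to keep warnings and errors only:

```python
import logging

# Raise the threshold on the 'stpy' logger: DEBUG/INFO traces are suppressed,
# warnings and errors still propagate to the configured handlers.
logging.getLogger('stpy').setLevel(logging.WARNING)
```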

stpy.fabric

The fabric module provides a connection to a StreamScape node through HTTP, giving access to the Dataspace, Services and Exchange.

Date and Time types

Dataspace supports the following date and time types:
  • date - represents date and time, mapped to java.util.Date
  • SqlDate - represents date, mapped to java.sql.Date
  • SqlTime - represents time, mapped to java.sql.Time
  • SqlTimestamp - represents timestamp, mapped to java.sql.Timestamp
Values of these types can be passed in the following ways:
  • date
    • as milliseconds in UTC since 1970-01-01
    • as a datetime object, e.g. datetime(2017, 1, 15, 12, 17, 20); may be in any timezone, the default timezone is UTC
  • SqlDate
    • as a string "2017-01-15"
    • as a date object, e.g. date(2017, 1, 15)
  • SqlTime
    • as a string "13:17:20"
    • as a time object, e.g. time(13, 17, 20)
  • SqlTimestamp
    • as a string "2017-01-15 13:17:20.123" (must be in the dataspace timezone)
    • as milliseconds in UTC since 1970-01-01
    • as a datetime object, e.g. datetime(2017, 1, 15, 12, 17, 20); may be in any timezone, the default timezone is UTC

All date and SqlTimestamp values are returned in the UTC timezone.
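For reference, the millisecond representation converts to and from Python datetime objects as below. This is plain standard-library Python (shown for Python 3), independent of stpy:

```python
from datetime import datetime, timezone

def millis_to_utc(millis):
    """Convert milliseconds in UTC since 1970-01-01 to an aware datetime."""
    return datetime.fromtimestamp(millis / 1000.0, tz=timezone.utc)

def utc_to_millis(dt):
    """Convert an aware datetime back to milliseconds since the epoch."""
    return int(dt.timestamp() * 1000)

ts = utc_to_millis(datetime(2017, 1, 15, 12, 17, 20, tzinfo=timezone.utc))
print(ts)                 # 1484482640000
print(millis_to_utc(ts))  # 2017-01-15 12:17:20+00:00
```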

Fetch size

By default, the full select result row set is downloaded to the client. The DataspaceAccessor.setFetchSize(size) method can be used to set another fetch size. A value of 0 or a negative value means fetch all rows in the first request.
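The trade-off can be pictured with a small stand-alone sketch: with a fetch size of n, a result of N rows costs roughly ceil(N/n) server round trips. The function below is only a conceptual model of that batching, not stpy code:

```python
def fetch_in_batches(rows, fetch_size):
    """Yield result rows in server round-trip batches of at most fetch_size.
    A fetch_size of 0 or a negative value means fetch everything in one request."""
    if fetch_size <= 0:
        yield list(rows)
        return
    for start in range(0, len(rows), fetch_size):
        yield rows[start:start + fetch_size]

rows = [(1, 'Bill'), (2, 'Steve'), (3, 'Mark'), (4, 'Mike'), (5, 'Dave')]
print(len(list(fetch_in_batches(rows, 2))))  # 3 round trips for 5 rows
print(len(list(fetch_in_batches(rows, 0))))  # 1 round trip fetches everything
```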

LOB objects

Dataspace supports the following types of large object:
  • BLOB - binary large object
  • CLOB - character large object
  • FLOB - file large object

Dataspace LOBs (large objects) are returned to the client as objects implementing the Clob or Blob interface, depending on the LOB type. See the module documentation for details on these interfaces' methods. The maximum size of LOB data downloaded with the first result can be set with the HTTPDataspaceAccessor.setDownloadableLobSize(size) method. The remaining LOB data can be accessed with the corresponding LOB methods. A value of -1 means all data is downloaded in the first result.
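The full-versus-proxy decision can be sketched as follows. BlobImpl and RowSetBlobProxy are the class names that appear in the samples below, but this function is only an illustration of the rule, not the library's implementation:

```python
def lob_delivery(data, downloadable_size):
    """Conceptual model: decide how much LOB data arrives with the first result."""
    if downloadable_size == -1 or len(data) <= downloadable_size:
        return ('full', bytes(data))                    # whole LOB in the first result (BlobImpl-like)
    return ('proxy', bytes(data[:downloadable_size]))   # remainder fetched on demand (RowSetBlobProxy-like)

print(lob_delivery(bytearray(5), 7)[0])    # full
print(lob_delivery(bytearray(11), 7)[0])   # proxy
```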

Samples

Connecting to the node

Create and open HTTPFabricConnection:

from stpy import HTTPFabricConnection
connection = HTTPFabricConnection("http://localhost:8888", "Admin", "Admin")
connection.open()

Check connection:

>>> connection.isOpened()
True

>>> connection.ping()
u'AVAILABLE'

Close connection:

connection.close()

>>> connection.isOpened()
False
>>> connection.ping()
'UNAVAILABLE'

Dataspace access

First, create a dataspace:

session = connection.createSLSession()
session.slangRequest("create dataspace pythonds type TSPACE")
session.close()

Create dataspace accessor:

from stpy import DataspaceType
accessor = connection.createDataspaceAccessor(None, DataspaceType.TSPACE, "pythonds")

Check dataspace accessor:

>>> accessor.isOpened()
True
>>> accessor.isAvailable()
True

Create table:

accessor.executeQuery("create persistent table accounts(id int, name string, followers long, primary key(id))")

Insert rows into the table:

accessor.executeQuery("insert into accounts values(1, 'Bill', 80)")
accessor.executeQuery("insert into accounts values(2, 'Steve', 10000)")
accessor.executeQuery("insert into accounts values(3, 'Mark', 900)")
accessor.executeQuery("insert into accounts values(4, 'Mike', 12000)")
accessor.executeQuery("insert into accounts values(5, 'Dave', 200)")

Execute select query:

min_followers = 100
result = accessor.executeQuery("select name, followers from accounts where followers > ?", [min_followers])

Iterating over result:

>>> result.next()
True
>>> result.getCurrentRow().getColumnByName("name")
'Steve'
>>> result.getCurrentRow().getColumnByIndex(1)
10000L
>>> result.next()
True
>>> result.getCurrentRow().getColumnByName("name")
'Mark'
>>> result.getCurrentRow().getColumnByIndex(1)
900
>>> result.next()
True
>>> result.next()
True
>>> result.next()
False

Close result:

result.close()

Close accessor:

accessor.close()

Operations with complex objects and semantic types

First, create the sample SDOs in the node using a slang session:

session = connection.createSLSession()

session.slangRequest("create sdo EmployeeAddress as {string city; string zipcode;} \
      namespace stpy.sample \
      replace archive stpysample \
      replace package stpysample")

session.slangRequest("create sdo Employee as {string name; EmployeeAddress address;} \
      namespace stpy.sample \
      replace archive stpysample \
      replace package stpysample")

Create a collection to store objects of the created SDOs:

from stpy import DataspaceType
accessor = connection.createDataspaceAccessor(None, DataspaceType.TSPACE, "pythonds")

accessor.executeQuery("create persistent table employees(id int, employee Employee, primary key(id))")

A complex object can be inserted into a collection in the following ways:

  • create and fill object in accessor session and then insert into table in the same session:

    accessor.executeQuery("Employee employee = new Employee()")
    accessor.executeQuery("employee.name = 'Steve'")
    accessor.executeQuery("employee.address = new EmployeeAddress()")
    accessor.executeQuery("employee.address.city = 'NY'")
    accessor.executeQuery("employee.address.zipcode = '10001'")
    
    accessor.executeQuery("insert into employees values (1, employee)")
    
  • create and fill object in dictionary and then insert into table:

    employeeMap = { \
          '@type' : 'Employee', \
          'name' : 'Mike', \
          'address' : { \
                '@type' : 'EmployeeAddress', \
                'city' : 'NY', \
                'zipcode' : '10005' \
           } \
    }
    
    accessor.executeQuery("insert into employees values (2, ?)", [employeeMap])
    
  • import semantic type, create and fill object on the client and then insert into table:

    connection.importSemanticType('Employee')
    employee = connection.getTypeFactory().instantiateType('Employee')
    employee.name = 'Bill'
    employee.address = connection.getTypeFactory().instantiateType('EmployeeAddress')
    employee.address.city = 'LA'
    employee.address.zipcode = '90001'
    
    accessor.executeQuery("insert into employees values (3, ?)", [employee])
    
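Note that the dictionary form in the second approach is plain JSON-compatible data, with '@type' naming the semantic type at each nesting level, so it can be built or inspected with the standard json module:

```python
import json

employee_map = {
    '@type': 'Employee',
    'name': 'Mike',
    'address': {
        '@type': 'EmployeeAddress',
        'city': 'NY',
        'zipcode': '10005',
    },
}

# The dict survives a JSON round trip unchanged, nested '@type' tags included.
assert json.loads(json.dumps(employee_map)) == employee_map
```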
A select query returns a complex object as:
  • a dictionary - if the corresponding semantic type is not imported
  • a deserialized object - if the corresponding semantic type is imported

We have already imported the Employee semantic type in the previous step, so it will be returned as a deserialized object:

rowSet = accessor.executeQuery("select * from employees")
>>> rowSet.next()
True
>>> rowSet.getCurrentRow().getColumnByName('id')
1
>>> e = rowSet.getCurrentRow().getColumnByName('employee')
>>> e.name
u'Steve'
>>> e.address.city
u'NY'

>>> rowSet.next()
True
>>> rowSet.getCurrentRow().getColumnByName('id')
2
>>> e = rowSet.getCurrentRow().getColumnByName('employee')
>>> e.name
u'Mike'
>>> e.address.city
u'NY'

Operations with LOBs

Create table with LOB columns:

accessor.executeQuery("create persistent table LobsTable(id int identity, b Blob, c Clob)")

Insert lob data:

accessor.executeQuery("insert into LobsTable values(1, ?, ?)", [bytearray([1,2,3, 4, 5]), 'qwerty'])
accessor.executeQuery("insert into LobsTable values(2, ?, ?)", [bytearray([10,11,12,13,14,15,16,17,18,19,20]), 'qwerty12345678'])

LOB column values are returned as objects with the Clob or Blob interface, depending on the LOB type.

By default, the full LOB data is downloaded in the first fetch. This can be changed with the HTTPDataspaceAccessor.setDownloadableLobSize(size) method. If the LOB size is greater than the downloadable LOB size, a LOB proxy is returned and the LOB data is downloaded on request.

Set downloadable size to 7:

accessor.setDownloadableLobSize(7)

Select lobs:

rowSet = accessor.executeQuery("select * from LobsTable")

>>> rowSet.next()
True
>>> row = rowSet.getCurrentRow()
>>> row.getColumnByName('id')
1
>>> b = row.getColumnByName('b')   # BlobImpl is returned because the size is not greater than the downloadable size
>>> c = row.getColumnByName('c')

>>> b.length()
5
>>> b.get_all_bytes()
bytearray(b'\x01\x02\x03\x04\x05')
>>> b.get_bytes(1,2)
bytearray(b'\x02\x03')

>>> c.length()
6
>>> c.get_all_string()
u'qwerty'
>>> c.get_sub_string(1,2)
u'we'

>>> rowSet.next()
True
>>> row = rowSet.getCurrentRow()
>>> row.getColumnByName('id')
2
>>> b = row.getColumnByName('b')   # RowSetBlobProxy is returned because the size is greater than the downloadable size
>>> c = row.getColumnByName('c')

>>> b.length()
11
>>> b.get_bytes(9,2)
bytearray(b'\x13\x14')
>>> b.get_all_bytes()
bytearray(b'\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14')

>>> c.length()
14
>>> c.get_sub_string(8,3)
u'345'
>>> c.get_all_string()
u'qwerty12345678'

Date and Time operations

There are three date/time types in the dataspace:
  • SqlDate - only date
  • SqlTime - only time
  • SqlTimestamp - date and time

Create table with date and time columns:

accessor.executeQuery("create persistent table DateTimeTable(id int identity, sDate SqlDate, sTime SqlTime, sTimestamp SqlTimestamp)")

Date and time values can be inserted in different ways:

# 1. using string representation
#    in this case the timestamp must be in the dataspace timezone
accessor.executeQuery("insert into DateTimeTable values(1, '2017-02-28', '10:11:12', '2017-02-28 10:11:12.123')")

# 2. using Python date, time and datetime objects
#    in this case values may be in any timezone; the default is UTC
from datetime import datetime, date, time

# timestamp in UTC timezone
accessor.executeQuery("insert into DateTimeTable values(2, ?, ?, ?)", [date(2017, 2, 28), time(13, 17, 20), datetime(2017, 2, 28, 12, 17, 20)])

# timestamp in local timezone
import pytz # $ pip install pytz
from tzlocal import get_localzone # $ pip install tzlocal
sTimestampLocal = get_localzone().localize(datetime(2017, 2, 28, 12, 17, 20))
accessor.executeQuery("insert into DateTimeTable values(3, ?, ?, ?)", [date(2017, 2, 28), time(13, 17, 20), sTimestampLocal])

Select data:

rowSet = accessor.executeQuery("select * from DateTimeTable")
>>> rowSet.next()
True
>>> row = rowSet.getCurrentRow()
>>> row.getColumnByName("id")
1
>>> row.getColumnByName("sDate")
datetime.date(2017, 2, 28)
>>> row.getColumnByName("sTime")
datetime.time(10, 11, 12)
>>> row.getColumnByName("sTimestamp") # result in UTC; depends on the dataspace timezone
datetime.datetime(2017, 2, 28, 7, 11, 12, 123000, tzinfo=<UTC>)

>>> rowSet.next()
True
>>> row = rowSet.getCurrentRow()
>>> row.getColumnByName("id")
2
>>> row.getColumnByName("sDate")
datetime.date(2017, 2, 28)
>>> row.getColumnByName("sTime")
datetime.time(13, 17, 20)
>>> row.getColumnByName("sTimestamp") # result in UTC, the same as was passed to insert
datetime.datetime(2017, 2, 28, 12, 17, 20, tzinfo=<UTC>)

>>> rowSet.next()
True
>>> row = rowSet.getCurrentRow()
>>> row.getColumnByName("id")
3
>>> row.getColumnByName("sTimestamp") # result in UTC, the value passed to insert converted to the UTC timezone
datetime.datetime(2017, 2, 28, 9, 17, 20, tzinfo=<UTC>)

Slang access

Creating slang session:

session = connection.createSLSession()

Executing query:

response = session.slangRequest("list nodes")

Checking response:

>>> response.isOK
True
>>> response.rowSet.next()
True
>>> response.rowSet.getCurrentRow().getColumnByIndex(0)
'TestNode1'
>>> response.rowSet.getCurrentRow().getColumns()
['TestNode1', 'Processor Node', '0.0.1.0', 'retina/127.0.0.1', '20410', '3.6.001.22']

Closing session:

session.close()

Services access

Not yet implemented

Exchange access

Not yet implemented

stpy.ds

The ds module is a thin Python wrapper around the fabric API. It is compatible with the Python DB API interface (version 2).

Functions and attributes

Only a few top-level functions and attributes are defined within stpy.ds:

connect(parameters...)

Constructor for creating a connection to the database. Returns a Connection Object. Parameters:

url
URL of the node to connect to in format http[s]://<host>:<port>
dataspace
dataspace to connect to in format <DataspaceType>.<DataspaceName>
username
user to connect as
password
password to use
autocommit
If True (default), autocommit is enabled. If False, autocommit is disabled. If None, autocommit isn’t set and server default is used.
timeout
HTTP timeout in seconds applied to all operations
ssl

dictionary with SSL connection parameters:

cert_reqs - specifies whether a certificate is required from the other side of the connection, and whether it will be validated if provided. Should be one of the following values:
  • CERT_NONE or None - certificates are ignored
  • CERT_OPTIONAL - not required, but validated if provided
  • CERT_REQUIRED - required and validated
If the value of this parameter is not CERT_NONE, then the ca_certs parameter must point to a file of CA certificates.

ca_certs - a file containing a set of concatenated 'certification authority' certificates, which are used to validate certificates passed from the other end of the connection.

ca_cert_dir - a directory containing CA certificates in multiple separate files, as supported by OpenSSL's -CApath flag or the capath argument to SSLContext.load_verify_locations().

ssl_version - specifies which version of the SSL protocol to use. Optional.

key_file and cert_file - optional files which contain a certificate to be used to identify the local side of the connection.

disable_warnings - specifies whether to disable the InsecureRequestWarning warning; True by default.

apilevel
String constant stating the supported DB API level. ‘2.0’
threadsafety

Integer constant stating the level of thread safety the interface supports. This is set to 1, which means: threads may share the module.

All cursors created on one connection use the same dataspace session, so they cannot be used simultaneously by multiple threads.

paramstyle

String constant stating the type of parameter marker formatting expected by the interface. Currently only the qmark format is supported, so any change to this field has no effect.

The qmark format means that a question mark (?) is used for parameter placeholders. Parameter placeholders can only be used to insert column values. They cannot be used for other parts of SQL, such as table names or statements.
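For illustration only, a naive expansion shows what each placeholder stands for; the driver handles parameter transfer itself, so never build SQL strings like this in production code:

```python
def show_qmark(sql, params):
    """Illustration-only expansion of qmark (?) placeholders, one per parameter.
    Not how a real driver works: drivers pass values separately, safely escaped."""
    parts = sql.split('?')
    assert len(parts) == len(params) + 1, "one parameter per placeholder"
    out = parts[0]
    for value, tail in zip(params, parts[1:]):
        out += repr(value) + tail
    return out

print(show_qmark("select name from accounts where followers > ?", [100]))
# select name from accounts where followers > 100
```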

Connection Objects

Connection objects are returned by the connect() function.

commit()
Commits any pending transaction to the database. If no transaction has been started, this method does nothing.
rollback()
Rolls back the dataspace transaction, if one has been started. Raises an exception if autocommit is enabled.
cursor()
Return a new Cursor Object using the connection.

There are many more methods defined on the connection object. For more information on them, see the module pydoc documentation.

Cursor Objects

callproc(funcname, args)

Calls a stored dataspace function with the given name. The sequence of parameters must contain one entry for each argument the function expects. The operation uses the .arraysize value as the fetch size when fetching rows from the server.

The function returns the passed parameters. Function results can be accessed through the standard .fetch*() methods.

close()
Closes the cursor. Future operations raise ProgrammingError. If you are using a fetch size, it is important to close the cursor when you are done with it, before creating a new one.
.description

This read-only attribute is a sequence of 7-item sequences. Each of these sequences contains information describing one result column:

name
type_code
display_size
internal_size
precision
scale
null_ok

The first two items ( name and type_code ) are mandatory, the other five are optional and are set to None if no meaningful values can be provided.

This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the .execute*() method yet.

The type_code can be interpreted by comparing it to the Type Objects specified in the section below.
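For example, column names and type codes can be read positionally from .description; the concrete values below are hypothetical, for illustration only:

```python
# Hypothetical .description value for: select name, followers from accounts
description = [
    ('name', 'STRING', None, None, None, None, None),
    ('followers', 'NUMBER', None, None, None, None, None),
]

column_names = [col[0] for col in description]  # first item is always the name
type_codes = [col[1] for col in description]    # second item is the type_code
print(column_names)  # ['name', 'followers']
```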

.rowcount

This read-only attribute specifies the number of rows that the last .execute*() produced (for DQL statements like SELECT ) or affected (for DML statements like UPDATE or INSERT ).

The attribute is None in case no .execute*() has been performed on the cursor or the rowcount of the last operation cannot be determined by the interface.

.arraysize
Defines the fetch size used to fetch rows from the server; by default one row. Also defines the default number of rows fetchmany() will fetch. This value should be set before calling callproc() or execute(). A value of 0 or -1 means fetch all rows in the first call.

There are many more methods defined on the cursor object. For more information on them, see the module pydoc documentation.

Types

Primitive values returned by a query are converted to the following types:

  • STRING
  • BINARY
  • NUMBER
  • DATE
  • TIME
  • DATETIME

Complex values are deserialized from a JSON string into a dict. If the corresponding semantic type has been imported, the dict is then converted to an object of that imported type. Users can also register their own type converters; in that case the dict is converted to a user object using the user-defined converter.
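What this conversion amounts to can be sketched in plain Python: walk the dict and replace each '@type'-tagged mapping with an attribute object. The Obj class here is only a stand-in for an instantiated semantic type, not stpy's actual type factory:

```python
class Obj(object):
    """Stand-in for an instantiated semantic type (illustration only)."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

def from_tagged_dict(value):
    """Recursively turn {'@type': ...} dicts into attribute objects."""
    if isinstance(value, dict) and '@type' in value:
        fields = {k: from_tagged_dict(v) for k, v in value.items() if k != '@type'}
        return Obj(**fields)
    return value

e = from_tagged_dict({'@type': 'Employee', 'name': 'Steve',
                      'address': {'@type': 'EmployeeAddress', 'city': 'NY'}})
print(e.name, e.address.city)  # Steve NY
```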

Samples

Basic operations

First, connect to the dataspace:

from stpy import ds
db = ds.connect(url="http://localhost:8888", dataspace="tspace.pythonds",
                username="Admin", password="Admin", timeout=30, autocommit=True)

To perform a query, you first need a cursor, and then you can execute queries on it:

c = db.cursor()

Create the table:

c.execute("create persistent table accounts(id int, name string, followers long, primary key(id))")

Set .arraysize to 0, which means fetch all rows in the first call:

c.arraysize = 0

Insert rows into the table:

c.execute("insert into accounts values(1, 'Bill', 80)")
c.execute("insert into accounts values(2, ?, 10000)", ('Steve',))  # using question mark formatting
c.execute("insert into accounts values(3, ?, ?)", ['Mark', 900])  # using question mark formatting
c.execute("insert into accounts values(4, 'Mike', 12000)")
c.execute("insert into accounts values(5, 'Dave', 200)")

Select from table:

min_followers = 200
c.execute("select name, followers from accounts where followers >= ?", (min_followers,))

This example shows a select query with question mark formatting: "where followers >= ?" is formatted to "where followers >= 200".

Parameters for the query should be passed as a single tuple. The trailing comma in (min_followers,) is what makes it a one-element tuple.
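The trailing comma matters because parentheses alone do not make a tuple, as this plain-Python check shows:

```python
min_followers = 200

# (x,) is a one-element tuple; (x) is just x with parentheses around it.
assert isinstance((min_followers,), tuple)
assert (min_followers,) == (200,)
assert (min_followers) == 200   # not a tuple at all
```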

Get info about returned row set:

c.description

Get the row count:

c.rowcount

And now, fetch the results:

>>> c.fetchone()
['Steve', 10000L]
>>> c.next()
['Mark', 900L]
>>> c.fetchmany(10)
[['Mike', 12000L], ['Dave', 200L]]

Scroll the cursor:

>>> c.scroll(0, "absolute")
['Steve', 10000L]
>>> c.fetchmany(10)
[['Mark', 900L], ['Mike', 12000L], ['Dave', 200L]]
>>> c.next()
None

If you want more rows, you could use c.fetchmany(n) or c.fetchall(). On c.fetchmany(n), the n is optional and defaults to c.arraysize, which is normally 1. Both of these methods return a sequence of rows, or an empty sequence if there are no more rows.

Note that in contrast to the above, c.fetchone() returns None when there are no more rows to fetch.

Note that the .arraysize attribute defines the fetch size, i.e. the maximum number of rows transferred from the server to the client in one response. A value of 0 or a negative value means that all rows are fetched in the first call. If .arraysize > 0, the first response from the server contains at most .arraysize rows. The next .arraysize calls to c.fetchone() do not send requests to the server; the call after that sends a request and receives a response with at most .arraysize rows.

Operations with complex objects and semantic types

First we need to create the sample SDOs, dataspace and table. See the corresponding section in the fabric samples above.

Then connect to the dataspace and create a cursor:

from stpy import ds
db = ds.connect(url="http://localhost:8888", dataspace="tspace.pythonds",
                username="Admin", password="Admin", timeout=30, autocommit=True)
c = db.cursor()

Create table:

c.execute("create persistent table employees(id int, employee Employee, primary key(id))")

A complex object can be inserted into a collection in the following ways:

  • create and fill object in accessor session and then insert into table in the same session:

    c.execute("Employee employee = new Employee()")
    c.execute("employee.name = 'Steve'")
    c.execute("employee.address = new EmployeeAddress()")
    c.execute("employee.address.city = 'NY'")
    c.execute("employee.address.zipcode = '10001'")
    
    c.execute("insert into employees values (1, employee)")
    
  • create and fill object in dictionary and then insert into table:

    employeeMap = { \
          '@type' : 'Employee', \
          'name' : 'Mike', \
          'address' : { \
                '@type' : 'EmployeeAddress', \
                'city' : 'NY', \
                'zipcode' : '10005' \
           } \
    }
    
    c.execute("insert into employees values (2, ?)", [employeeMap])
    
  • import semantic type, create and fill object on the client and then insert into table:

    db.import_semantic_type('Employee')
    employee = db.get_type_factory().instantiateType('Employee')
    employee.name = 'Bill'
    employee.address = db.get_type_factory().instantiateType('EmployeeAddress')
    employee.address.city = 'LA'
    employee.address.zipcode = '90001'
    
    c.execute("insert into employees values (3, ?)", (employee,))
    
A select query returns a complex object as:
  • a dictionary - if the corresponding semantic type is not imported
  • a deserialized object - if the corresponding semantic type is imported

We have already imported the Employee semantic type in the previous step, so it will be returned as a deserialized object:

c.execute("select * from employees")
>>> row = c.fetchone()
>>> row[0]
1
>>> row[1].name
u'Steve'
>>> row[1].address.city
u'NY'

>>> row = c.fetchone()
>>> row[0]
2
>>> row[1].name
u'Mike'
>>> row[1].address.city
u'NY'

Operations with LOBs

Connect to the dataspace and create a cursor:

from stpy import ds
db = ds.connect(url="http://localhost:8888", dataspace="tspace.pythonds",
                username="Admin", password="Admin", timeout=30, autocommit=True)
c = db.cursor()

Create table with LOB columns:

c.execute("create persistent table LobsTable(id int identity, b Blob, c Clob)")

Insert lob data:

c.execute("insert into LobsTable values(1, ?, ?)", [bytearray([1,2,3, 4, 5]), 'qwerty'])
c.execute("insert into LobsTable values(2, ?, ?)", [bytearray([10,11,12,13,14,15,16,17,18,19,20]), 'qwerty12345678'])

LOB column values are returned as objects with the Clob or Blob interface, depending on the LOB type.

By default, the full LOB data is downloaded in the first fetch. This can be changed with the db.set_downloadable_lob_size(size) method. If the LOB size is greater than the downloadable LOB size, a LOB proxy is returned and the LOB data is downloaded on request.

Set downloadable size to 7:

db.set_downloadable_lob_size(7)

Select lobs:

c.execute("select * from LobsTable")
>>> row = c.fetchone()
>>> row
[1, <stpy.fabric.lobs.BlobImpl object at 0x104351490>, <stpy.fabric.lobs.RowSetClobProxy object at 0x1043514d0>]

>>> row[1].length()
5
>>> row[1].get_all_bytes()
bytearray(b'\x01\x02\x03\x04\x05')
>>> row[1].get_bytes(1,2)
bytearray(b'\x02\x03')

>>> row[2].length()
6
>>> row[2].get_all_string()
u'qwerty'
>>> row[2].get_sub_string(1,2)
u'we'

>>> row = c.fetchone()
>>> row
[2, <stpy.fabric.lobs.RowSetBlobProxy object at 0x104351590>, <stpy.fabric.lobs.RowSetClobProxy object at 0x1043516d0>]

>>> row[1].length()
11
>>> row[1].get_all_bytes()
bytearray(b'\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14')
>>> row[1].get_bytes(1,2)
bytearray(b'\x0b\x0c')
>>> row[2].length()
14
>>> row[2].get_all_string()
u'qwerty12345678'
>>> row[2].get_sub_string(1,2)
u'we'

Date and Time operations

Then connect to the dataspace and create a cursor:

from stpy import ds
db = ds.connect(url="http://localhost:8888", dataspace="tspace.pythonds",
                username="Admin", password="Admin", timeout=30, autocommit=True)
c = db.cursor()

There are three date/time types in the dataspace:
  • SqlDate - only date
  • SqlTime - only time
  • SqlTimestamp - date and time

Create table with date and time columns:

c.execute("create persistent table DateTimeTable(id int identity, sDate SqlDate, sTime SqlTime, sTimestamp SqlTimestamp)")

Date and time values can be inserted in different ways:

# 1. using string representation
#    in this case the timestamp must be in the dataspace timezone
c.execute("insert into DateTimeTable values(1, '2017-02-28', '10:11:12', '2017-02-28 10:11:12.123')")

# 2. using Python date, time and datetime objects
#    in this case values may be in any timezone; the default is UTC

from datetime import datetime, date, time
# timestamp in UTC timezone
c.execute("insert into DateTimeTable values(2, ?, ?, ?)", [date(2017, 2, 28), time(13, 17, 20), datetime(2017, 2, 28, 12, 17, 20)])

# timestamp in local timezone
import pytz # $ pip install pytz
from tzlocal import get_localzone # $ pip install tzlocal
sTimestampLocal = get_localzone().localize(datetime(2017, 2, 28, 12, 17, 20))
c.execute("insert into DateTimeTable values(3, ?, ?, ?)", [date(2017, 2, 28), time(13, 17, 20), sTimestampLocal])

Select data:

c.execute("select * from DateTimeTable")
>>> row = c.fetchone()
>>> row[0]
1
>>> row[1]
datetime.date(2017, 2, 28)
>>> row[2]
datetime.time(10, 11, 12)
>>> row[3]  # inserted value in the dataspace timezone, converted to UTC
datetime.datetime(2017, 2, 28, 7, 11, 12, 123000, tzinfo=<UTC>)

>>> row = c.fetchone()
>>> row[0]
2
>>> row[1]
datetime.date(2017, 2, 28)
>>> row[2]
datetime.time(13, 17, 20)
>>> row[3]  # result in UTC, the same as was passed to insert
datetime.datetime(2017, 2, 28, 12, 17, 20, tzinfo=<UTC>)

>>> row = c.fetchone()
>>> row[0]
3
>>> row[1]
datetime.date(2017, 2, 28)
>>> row[2]
datetime.time(13, 17, 20)
>>> row[3]   # result in UTC, value passed to insert converted to UTC timezone
datetime.datetime(2017, 2, 28, 9, 17, 20, tzinfo=<UTC>)