[Genivi-ipc] [capic] Supporting multiple backends (was RE: [ANN] First public release of CAPIC PoC)

Konopelko, Pavel (P.) pkonopel at visteon.com
Thu Jun 2 09:19:31 EDT 2016


Andreas.Warnke at elektrobit.com wrote on 2016-05-31:
> I guess we once had the idea that a "Calculator" service can be
> provided via two different backends at the same time.
> This goal is quite interesting for me, since we currently think about
> implementing a CAPIC interface in a project.
> Below, I'd like to point out a challenge we may need to solve for this.
> Andreas
> When looking at the example code at
> http://git.projects.genivi.org/?p=common-api/c-poc.git;a=blob;f=ref/simple/src/simpleclient.c;h=c53db4b64dfe4d138fdfe093383d2a059053645f;hb=refs/heads/master
> I see a client that reads a value of a service by
> result = cc_Calculator_split(instance2, value, &whole, &fraction);
> If now the same call with other parameters
> result = cc_Calculator_split(instance3, value, &whole, &fraction);
> shall use a different backend (maybe a proprietary one), then we need
> some kind of indirection.
> Indirection in C is possible either via #define macros (which
> contradict MISRA-C) or via function pointers.
> Do you see a third solution? Do you already have ideas how to solve this?

The short answer is that I am currently leaning towards function pointers.

The longer answer is that I would prioritize keeping the "user" interface (1) as simple and intuitive as possible and (2) independent of any backend.  Additionally, the "user" application should compile independently of any backend-specific code (in other words, the backend selection should be possible at runtime).

Given the above, my current approach would be roughly this:

* have backend-specific vtable instances for each interface;
* store a reference to the proper vtable in the interface instance structure when the instance is created/initialized;
* have a backend-independent implementation of the interface methods that only forwards the call through the vtable to the proper backend (this can be inlined to remove the overhead, but the client should see a clean API hiding all implementation details, such as the vtable).

I would actually try to implement the above to flesh out the details.  This would obviously require adding a second backend, and a good question is which one.  Here are a couple of ideas:

* support the SOME/IP backend available with CAPIC++ (this would require some sort of adapter);
* support a backend for in-process communication (e.g., based on 0mq or nanomsg);
* support ipcq-based backend;
* support a backend for VSI (which is currently under active development).

All of them have their pros and cons.  If somebody else is interested in helping to implement a particular one, that would simplify the choice.

--Pavel Konopelko

> -----Original Message-----
> From: Konopelko, Pavel (P.) [mailto:pkonopel at visteon.com]
> Sent: Friday, August 28, 2015 11:44 AM
> To: Warnke, Andreas; genivi-ipc at lists.genivi.org
> Cc: Thelin, Johan (Pelagicore)
> Subject: RE: [ANN] First public release of CAPIC PoC
> Andreas,
> Andreas.Warnke at elektrobit.com wrote on 2015-08-27:
>> Some comments/questions on CommonAPI-C:
> Thanks for your interest.  Just to make it clear before responding to
> the points below: the PoC code published so far is just that, a proof
> of concept.  The architecture and design were developed only far
> enough to support the use cases required by the reference examples.  I
> did not spend any significant effort on up-front design in
> anticipation of the other requirements that are partially published on
> the Wiki.  Once there is sufficient understanding of the problem
> domain (and more use cases implemented), it is likely that some of the
> current design decisions will be revisited.
>> 1) It is good to separate the "interface instance" from the "event
>> context" object
> Conceptually, there are currently three entities supported by CAPIC:
> interface instances, event contexts and backends.  They are distinct
> objects, but are related to each other.  The backend is currently a
> singleton embedded in the backend.c module that provides D-Bus
> support.  The backend owns one event context object.  Interface
> instances (including the to-be-generated code in ref/*/src-gen/) are
> created by the backend and keep a reference to it.
>> 2) Do I see it right, that you support only 1 backend?
> Yes, currently the backend is a singleton.  Once I get to introducing
> a second backend, this will help to define a proper
> transport-independent backend interface (an abstraction) and to
> refactor the rest accordingly.  My personal plan is to add support for
> in-process messaging, but if somebody steps in to help me with that,
> any other transport would do for the purpose of driving the CAPIC
> architecture and design.
>> Does this mean
>> you have to decide either to use the user-private desktop channel or
>> the system-dbus channel?
> The PoC currently works only with D-Bus (via sd-bus) as the backend
> transport.  Supporting additional transports via backends that can be
> pluggable at run-time is the vague long-term plan.  Given a specific
> transport to be added, the design and implementation can be worked out.
>> 3) Is it right that one backend has exactly one event context?
> Yes, this is the current status.  If there are use cases where a
> particular backend needs to support multiple event contexts per process,
> please let me know.  As with many other things, this area of the big
> picture is still vague.  The current PoC supports one backend with one
> event context per process (i.e., client or server application).  Given
> specific use cases, this can be further refined.
>> 4) It is good to have a file descriptor attached to an event context.
>> This allows waiting for events using select or poll, or registering
>> with the GLib event loop for callbacks. (Did I get this right?)
> Yes, this is how it works in the PoC.  The API for "exporting" the
> internals of an event context is largely based on similar concepts
> from GLib [1] and sd-event [2].  This is also in line with one of the
> CAPIC maxims to leave the control over certain policies with the
> applications.  Dealing with concurrency (event loops, threads, etc.)
> is one such area.  Memory management is another one to be tackled in
> the next steps.
>> 5) Concerning the asynchronous calls by clients: How much control do
>> you have on which thread will execute the callback? Is this always the
>> main- event-loop thread?
> In the current PoC, all reference examples (ref/) are designed around
> one main loop that attaches the event context owned by the backend.
> Consequently, all processing is serialized by the main process thread.
> Long term, I lean towards keeping the main-loop-based design for the
> backends and other CAPIC infrastructure.  For applications that need
> to deploy their client and server interface instances to multiple
> threads, some additional support from CAPIC might be needed.  My
> preference is to avoid making CAPIC fully thread-safe (by adding
> locks, etc.), and instead to provide some minimal guarantees that
> would allow applications, for example, to create and run multiple
> backend instances in separate threads.  All instances tied to a
> particular backend would then live in one thread.  Any additional
> synchronization would be left to the application.
> Regards,
> --Pavel Konopelko
> [1] https://developer.gnome.org/glib/stable/glib-The-Main-Event-Loop.html#GSourceFuncs
> [2] http://www.freedesktop.org/software/systemd/man/sd_event_prepare.html
> genivi-ipc-bounces at lists.genivi.org wrote on 2015-08-26:
>> Hello everybody,
>> The Proof of Concept for Common API C just hit its first milestone.
>> The version v0.1.0 has been released and can be fetched from the
>> GENIVI git [1].  While it currently does not offer much consumable
>> functionality (actually, it does not even have a generator yet), it
>> will hopefully facilitate the discussion about the design and
>> implementation of Common API C.
>> Any questions and feedback are more than welcome.  I plan to have a
>> discussion about the current status and the next steps during the
>> upcoming SI EG F2F in September.  Read-out and working sessions are
>> also planned for the AMM in Seoul.
>> capic 0.1.0
>> -----------
>> * This is the first public release of Common API C.  The project is
>>   in the early stages and does not yet provide much consumable
>>   functionality.  The current goal is to flesh out the big picture of
>>   how the functionality should be split and what the required
>>   interfaces are.
>> * Reference examples including 'simple', 'game' and 'smartie' are
>>   provided to drive the development and to illustrate the usage.
>> * Supported Franca features include:
>>   - synchronous and asynchronous method invocation (including
>>     fireAndForget);
>>   - primitive data types of fixed size (i.e., Boolean, IntNN, UIntNN,
>>     Float, and Double).
>> * Supported backends include only D-Bus (via libsystemd's sd-bus).
>> * Both client and server applications are capable of managing
>>   multiple interface instances.  Additionally, server applications
>>   can provide different implementations for instances of a particular
>>   interface.  Working with multiple instances and their
>>   implementations is illustrated by the 'simple' reference example.
>> * Backends use an event loop to manage asynchronicity.  Backends
>>   provide an interface to embed their event loop into the
>>   application's event loop.  Event loop embedding is illustrated by
>>   the 'game' reference example.
>> Regards,
>> --Pavel Konopelko
>> [1] http://git.projects.genivi.org/?p=common-api/c-poc.git;a=summary
