1. Has anyone already prepared FOSS software selection process/criteria/guidelines/checklists? (Paul Sherwood)

Streif, Rudolf rstreif at jaguarlandrover.com
Mon Mar 7 14:10:13 EST 2016

My $0.02

> Quick intro here; I'm going to send along what I have but I wanted to try
> and address this email a bit before I send my document.

Looking forward to it. It may be a good starting point to build this out
into a GENIVI assessment toolkit for FOSS. It would probably also make
sense to separate classification criteria into general sections that apply
to all industries/uses and sections that are domain specific, for instance,
the automotive industry, if there are any.

> 1 On a scale of 1-10, what is the code quality? (Say 1 to be grad
> student, final exam paper)
> This is going to be subjective, no? Don't we need more concrete criteria?
Definitely subjective to some extent, but it is worthwhile to establish
some criteria to assess various things. Many papers and books have been
published on software metrics. While it's not the sorcerer's stone, it can
still provide some good insight, in particular if the metrics can be
collected automatically. Just some rough ideas (don't kill me over it):

   - LOC/SLOC: while it does not tell much on its own, it can at least be a
   rough indicator of complexity, depending on what problem the code tries
   to solve.
   - Comments-to-code ratio: while comments are easy to filter out, it is
   of course hard to assess their quality.
   - Coupling: how much code modules depend on the internals of other code
   modules rather than on cleanly designed interfaces; not easy to measure.
   - Average length of functions: can help assess complexity.
   - API complexity: how many APIs are there, and how many parameters do
   they take? Can help assess ease of use.
   - Test suite: does the code provide a test suite for unit and API tests?
   - Coding conventions: does the project have coding conventions, and are
   they adhered to?
   - Static code analysis: for C programs, are there lint scripts as part
   of the make process? Other programming languages may use other tools.
   - API documentation: are there Doxygen, JavaDoc, etc. headers for
   classes, functions, and so on?
   - Cross-building: do the code and build system support cross-building?
   (I know, Paul Sherwood, but cross-building won't go away anytime soon :)
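A few of the items above can indeed be collected automatically. Here is a
minimal sketch (my own illustration, not an endorsed tool) that gathers
SLOC, comment-to-code ratio, and a deliberately naive brace-counting
estimate of average function length for C-like sources; the regex does not
handle comment markers inside string literals:

```python
import re

def source_metrics(source: str):
    """Collect rough metrics from a C-like source string.
    Counts are approximate: comments are stripped with a simple
    regex, so edge cases (comment markers inside strings) are
    not handled."""
    # Strip /* ... */ and // comments, remembering how many
    # characters they occupied for the comment-to-code ratio.
    comment_re = re.compile(r"/\*.*?\*/|//[^\n]*", re.DOTALL)
    comment_chars = sum(len(m.group()) for m in comment_re.finditer(source))
    code = comment_re.sub("", source)

    sloc = sum(1 for line in code.splitlines() if line.strip())
    ratio = comment_chars / max(len(code), 1)

    # Very rough function-length estimate: count lines between
    # top-level opening and closing braces.
    lengths, depth, start = [], 0, None
    for i, line in enumerate(code.splitlines()):
        depth_before = depth
        depth += line.count("{") - line.count("}")
        if depth_before == 0 and depth > 0:
            start = i
        elif depth_before > 0 and depth == 0 and start is not None:
            lengths.append(i - start + 1)
            start = None
    avg_len = sum(lengths) / len(lengths) if lengths else 0.0

    return {"sloc": sloc,
            "comment_ratio": ratio,
            "avg_function_length": avg_len}

example = """\
/* add two ints */
int add(int a, int b) {
    return a + b;
}
"""
print(source_metrics(example))
```

In practice one would of course reach for established tools (lint, sloccount
and friends) rather than hand-rolled heuristics, but the point stands: these
numbers are cheap to gather across many candidate projects.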

> 2 What frameworks does the solution build upon?
> Interesting, what frameworks are you thinking of here? Qt vs. GNOME?

I would generalize this into "dependencies". A project may itself be mature
and do all the right things, but it may rely on dependencies that do not
live up to the quality standards (whatever those may be, TBD). As a chain
is only as strong as its weakest link, that could mean trouble down the
road.

>> As there are lots of good ideas built on improper, insufficient or just
>> experimental frameworks.
>> 3 Are the code maintainers open to contributions and collaborations?
> This I think we can measure and should be a key component. A couple ideas
> I have are measuring the number of patches, how quickly the patches are
> turned into commits, signed-off by lines (indicating maintainer agreement)
> and number of merges from branches into master, etc.
I agree. It may also be worthwhile looking at other metadata such as
conversations on project mailing lists. This is admittedly very subjective.
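To sketch how that kind of measurement could work in practice (the log text
below is made up; a real run would feed in actual `git log` output, e.g. via
`subprocess`), one can count commits per author and Signed-off-by trailers
as a proxy for maintainer engagement:

```python
import re
from collections import Counter

def contribution_stats(log_text: str):
    """Parse `git log` output to gauge contributor breadth and
    maintainer sign-off activity. Heuristic only: it keys on the
    standard Author: and Signed-off-by: lines."""
    authors = Counter(
        re.findall(r"^Author:\s*(.+?)\s*<", log_text, re.M))
    signoffs = Counter(
        re.findall(r"^\s*Signed-off-by:\s*(.+?)\s*<", log_text, re.M))
    return {"commits": sum(authors.values()),
            "unique_authors": len(authors),
            "signoffs": signoffs}

# Illustrative sample; names and hashes are invented.
sample_log = """\
commit 1111111
Author: Alice Dev <alice@example.org>

    driver: fix probe ordering

    Signed-off-by: Alice Dev <alice@example.org>
    Signed-off-by: Mel Maintainer <mel@example.org>

commit 2222222
Author: Bob Hacker <bob@example.org>

    docs: clarify API usage

    Signed-off-by: Bob Hacker <bob@example.org>
    Signed-off-by: Mel Maintainer <mel@example.org>
"""
print(contribution_stats(sample_log))
```

Combined with timestamps (`git log --since=...`), the same approach gives
patch-to-commit turnaround and merge frequency.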

> Regards
> Roland
>>> Hi all,
>>> I've been asked to assist with assessment of the applicability of
>>> potential FOSS projects for an automotive customer, and am wondering if
>>> there is anything already written, either for GENIVI or AGL that could
>>> help guide me. Or maybe this has been well covered in other communities?
>>> I've done a quick trawl of various wikis but can't find anything
>>> obvious.
>>> I've prepared a list of questions as follows, which I'd be happy to
>>> compare/contrast/contribute to any existing start point:
>>> - What is the preferred/recommended solution?
>> - Why is this chosen or preferred?
>>> - What are the alternative or competing solutions?
> These three questions above are part of a process I'd call
> "identification" of a relevant software component. This requires domain
> expertise. My example would be, if you're looking to have specific
> functionality, let's say MOST or EAVB, on your system, you need to identify
> the right kernel version that contains the functionality you're looking
> for. Since the functionality relies often on specific protocols and
> standards, someone familiar with those standards and protocols needs to be
> involved in the identification of the relevant kernel to ensure that you
> get the kernel you need. That domain expertise is not required for
> measuring the software maturity or quality of the given component, so I
> assume software component identification is already done and I focus on the
> process of measuring maturity after candidate components are identified.

That is why I suggest separating things into the more generic criteria and
domain-specific criteria.

>> - How big and how active is the community?
>>> - How big and how active is the upstream team?
>>> - What is their release cadence/approach?
> I take these issues into account in my modified open source maturity model
> approach.

That, of course, is a double-edged sword. A lot of patches, for example,
could mean an active community, or it could mean code with a lot of bugs.

>> - How much code is this?
> Tricky.

Are you referring to LOC/SLOC? Easy to collect, but its value on its own is
rather limited.

>> - How much effort to maintain?
> Are we getting into the domain of software complexity here? Would the
> usual measurements of software complexity apply?

I would think so. See above.

>> - Who are the upstream engineers?
>>> - Who employs them?
>>> - Where does the funding come from?
>>> - Is the funding secure and sustainable?
> Also addressed, at least somewhat, in my doc.
>> - Where is the source?
>>> - What are their key technology choices?
>>> - What are their dependencies?
>>> - Any weaknesses/bad choices?
> I use "anti-pattern" instead of weakness or bad choice.
>> - What licenses are applied?
>>> - Any political factors?
> I can unambiguously determine license. But one person's politics is
> another person's process. Can we measure that or only document it?

What do you actually mean by "political factors"?
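On the license half of that question, determination can at least be partly
automated by scanning for SPDX-License-Identifier tags (the machine-readable
convention used by the Linux kernel and many other projects). A sketch, with
made-up file contents; untagged files would still need manual review:

```python
import re
from collections import Counter

# Matches an SPDX tag, including compound expressions
# such as "GPL-2.0-only OR MIT".
SPDX_RE = re.compile(
    r"SPDX-License-Identifier:\s*"
    r"([\w.+-]+(?:\s+(?:AND|OR|WITH)\s+[\w.+-]+)*)")

def spdx_licenses(files: dict) -> Counter:
    """files maps a path to its source text; returns tag counts."""
    found = Counter()
    for path, text in files.items():
        m = SPDX_RE.search(text)
        found[m.group(1) if m else "UNKNOWN"] += 1
    return found

# Illustrative inputs; paths and contents are invented.
sample = {
    "drivers/foo.c": "// SPDX-License-Identifier: GPL-2.0-only\n"
                     "int foo(void);\n",
    "lib/bar.c": "/* SPDX-License-Identifier: MIT */\n"
                 "int bar(void);\n",
    "tools/baz.c": "int baz(void);\n",
}
print(spdx_licenses(sample))
```

Politics, by contrast, is probably something we can only document, not
measure.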

>> - What is the expected roadmap?
>>> - How close is this to our needs?
>>> - How flexible is the project?
>>> - Can we contribute to upstream?
>>> - Can we hire upstream?
> I don't touch on hiring because I think it might make some companies
> reluctant to let their resources work on open source projects, but would
> welcome discussion around this.

*Rudolf J Streif*
System Architect - Open Source Initiative
Open Source Technology Centre

*M:* +1.619.631.5383
*Email:*  rstreif at jaguarlandrover.com

UK: G/26/2 G02 Building 523, Engineering Centre, Gaydon, Warwick, CV35 0RR
US: 1419 NW 14th Ave, Portland, OR 97209
jaguar.com | landrover.com
Business Details:
Jaguar Land Rover Limited
Registered Office: Abbey Road, Whitley, Coventry CV3 4LF
Registered in England No: 1672070
