ITS - Intelligent Transportation Systems Report ITS Home Page

4 VII POC TECHNICAL OVERVIEW

4.1 POC System Architecture Description

The POC system includes mobile terminals that were typically installed in vehicles. In the POC, these units are known as On-Board Equipment (OBE). OBEs exchange messages with each other for Vehicle-to-Vehicle (V2V) applications, and with stationary roadside terminals, known as Road Side Equipment (RSE), for Vehicle-to-Infrastructure (V2I) applications. The link between OBEs, and between OBEs and RSEs, is the Dedicated Short Range Communications (DSRC) radio system. The RSEs are connected to, and are remotely managed from, a Service Delivery Node (SDN) and an Enterprise Network Operations Center (ENOC). The SDN provides a variety of services that are described in more detail in subsequent sections.

A critical aspect of the VII architecture is the management of scale. The system needs to be designed to support 100% vehicle deployment, which translates to just over 200 million vehicles. In operation, this means that applications such as Probe Data Collection (PDC) may be handling tens of millions of messages per second across the entire network. At the same time, the system must allow a single user to post, for example, a warning sign in the vicinity of a particular hazard. To manage these large-scale extremes, the system uses a tiered, tree-like architecture (see Figure 4-1). As can be seen in the figure, any given RSE needs to be capable of interacting with up to 250 vehicles at any given time. This limit is determined primarily by the number of vehicles that can fit inside a typical Radio Frequency (RF) footprint, which provides a range of approximately 250 m.

The uppermost level is the Network User. This feeds downward to a Service Delivery Node on the left, and then across to the Backbone and another Service Delivery Node on the right at level two. Also from the upper level, the Service Delivery Node feeds downward to the Backbone. The Service Delivery Node below the Network User feeds into three boxes labeled Roadside Equipment, and via DSRC into Mobile Terminals. The Service Delivery Node on the other side of the Backbone feeds downward to Roadside Equipment.

Figure 4-1 Overall System Structure

Each RSE is connected to a regional SDN via a backhaul link, and each SDN is connected to all other SDNs via a wide band backbone network. Using this architecture, any RSE is accessible from any SDN, and this is a key feature of the scalability of the system, since any user connecting to the local SDN can interact with any RSE.

A typical SDN is expected to support between 1,000 and 2,000 RSEs, so for a nationwide deployment there might be between 100 and 200 SDNs. The SDN provides a variety of services, but key to the discussion of scaling are the Advisory Message Delivery Service (AMDS), the Probe Data Collection Service (PDCS) and the Network User Gateway (NUG).
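These deployment figures are mutually consistent; a quick arithmetic check (the 200,000-RSE installed base is the figure cited later in this section):

```python
# Back-of-envelope consistency check of the deployment figures
# quoted in this section (all values come from the report's text).
vehicles = 200_000_000          # ~100% vehicle deployment
rses = 200_000                  # installed base cited for AMDS
rses_per_sdn = (1_000, 2_000)   # typical SDN capacity range

# Number of SDNs implied by each end of the capacity range.
sdn_range = tuple(rses // per_sdn for per_sdn in reversed(rses_per_sdn))
print(sdn_range)  # (100, 200) -- matches the 100 to 200 SDNs quoted
```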

The AMDS serves as a link between network users who have advisory messages to distribute and the entire installed base of RSEs (and therefore the entire OBE population). In one possible deployment scenario, the number of signage providers is expected to be less than about 10,000, so for signage the system allows 10,000 providers to efficiently interact with 200,000 RSEs and deliver signage messages to 200 million OBEs. In the reverse direction, the PDCS collects probe data from all of the RSEs attached to the SDN. This data is parsed into data “topics,” and then data for any given topic is distributed to network users who have subscribed to that particular topic. A typical topic might be instantaneous road speed at a particular location on a particular road. This scheme allows the system to collect vast amounts of data from vehicles on the road and subdivide the data, passing only those parts of interest to any given subscriber. It is expected that there may be about 10,000 to 100,000 probe data subscribers, and as with AMDS, the system effectively scales from over 200 million vehicles (generating roughly 50 Gbps) down to about 50,000 users of this data. The SDN also provides a simple routing system that links vehicle users with private service providers in either one-to-one or one-to-many relationships. In this role, the SDN effectively acts like a mobile Internet system linking users to web service centers such as navigation providers.
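The topic-based fan-out performed by the PDCS can be pictured as a small publish/subscribe registry. The sketch below is illustrative only; the topic structure, record fields and names are assumptions, not taken from the VII specifications.

```python
from collections import defaultdict

class ProbeDataService:
    """Toy sketch of PDCS topic fan-out: probe records are parsed into
    topics, and each subscriber receives only the topics it asked for.
    Topic and field names here are illustrative, not from the spec."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> subscriber names
        self.delivered = defaultdict(list)     # subscriber -> records

    def subscribe(self, who, topic):
        self.subscribers[topic].append(who)

    def publish(self, record):
        # A topic might be, e.g., road speed at a particular location.
        topic = (record["kind"], record["road"], record["location"])
        for who in self.subscribers[topic]:
            self.delivered[who].append(record)

pdcs = ProbeDataService()
pdcs.subscribe("traffic-ops", ("speed", "I-96", "mile 12"))
pdcs.publish({"kind": "speed", "road": "I-96", "location": "mile 12", "mph": 58})
pdcs.publish({"kind": "speed", "road": "I-75", "location": "mile 40", "mph": 71})
print(len(pdcs.delivered["traffic-ops"]))  # 1 -- only the subscribed topic arrives
```

The point of the design is the same as in the text: the system ingests everything, but each subscriber sees only the slice it subscribed to.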

The POC implementation of the system includes 55 RSEs placed at various locations in the northwestern Detroit suburbs. These RSEs are linked to two different SDNs using a variety of different backhaul technologies. One SDN is located in Novi, Michigan, and the other is located in Herndon, Virginia. The Herndon facility also includes an ENOC and the CA required to support security functions. The POC implementation is thus a minimalist version of the national system architecture allowing the program to assess the operational behavior of the system as if it were a full-scaled deployment.

4.2 Concept of Operations

Conceptually, the system provides several core functions from which a suite of applications may be created. The POC system:

As part of the overall VII program, a set of approximately 100 use cases or applications was developed by various stakeholder groups. In general, these descriptions did not fully articulate the use cases in the context of the system, but they did provide insight into the needs and priorities of the various stakeholders. From this initial set, 20 use cases that were expected to be available at the system's initial deployment were identified and articulated in more detail. This group is known as the “Day-1 Use Cases.”

Because developing and testing all 20 Day-1 Use Cases would have been impracticable, the POC program identified a subset of use cases that exercised the core functions described above. These were then implemented and tested in ways to assess both the functionality of the system and the baseline performance, under the assumption that the system would provide these core functions in the same way regardless of the specific details of the application.

In several of the safety applications, the use cases were scaled back to allow the assessment of key architectural and system aspects without requiring development of a full-blown application.

This report focuses on the POC applications developed and tested by the VIIC, which are described in the following sections.

4.3 Dedicated Short Range Communications

The 75 MHz band in the 5.9 GHz frequency range allocated by the FCC offers significant data transfer capacity. However, making use of this spectrum in a mobile environment required the development of new communications protocols. The core radio protocol used is based on the well-known IEEE 802.11a/b/g wireless Ethernet standard, often referred to as WiFi. Because of the unique mobile environment, the IEEE 802.11a standard was modified to allow what is known as an “association-less” protocol, identified as IEEE 802.11p. This means that the system does not establish a conventional network in which all of the mobile terminals are nodes that know about each other. The reason is that the mobile terminals (OBEs in the POC) enter and leave the hot spot rapidly; there is insufficient time to set up a new network identity for each new arrival and to inform all other nodes before the network changes again because a terminal has left the footprint of the RSE or a new one has arrived. On the surface, this approach may seem to limit the functionality of the system, since it means that any given mobile terminal cannot interact uniquely with another terminal (the way computers on an office network might), but this is not the case. Because the system is radio based, all terminals can hear all messages sent. Since, under most circumstances, one can simply broadcast a message in the local area and all terminals (OBEs and RSEs) can receive it, there is no need to establish a unique low-level network identity for each communicating device.
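The association-less model can be pictured as a shared medium in which every terminal currently in radio range hears every transmission, with no join or leave handshake. The toy simulation below illustrates only this concept; it involves no actual 802.11p mechanics.

```python
class SharedMedium:
    """Toy model of the association-less DSRC channel: there is no
    join/leave handshake and no per-node network identity -- a
    transmission is simply heard by every terminal currently in range."""

    def __init__(self):
        self.in_range = set()

    def transmit(self, sender, message):
        for terminal in self.in_range:
            if terminal is not sender:
                terminal.received.append(message)

class Terminal:
    def __init__(self):
        self.received = []

medium = SharedMedium()
obe_a, obe_b, rse = Terminal(), Terminal(), Terminal()
medium.in_range = {obe_a, obe_b, rse}   # entering range requires no setup
medium.transmit(obe_a, "heartbeat")
print(len(obe_b.received), len(rse.received))  # 1 1 -- everyone in range hears it
```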

The higher levels of the protocol are defined in a suite of standards known as IEEE 1609 Wireless Access in Vehicular Environments (WAVE). This suite addresses security (IEEE P1609.2), networking and messaging (IEEE P1609.3) and channel management (IEEE P1609.4). In particular, IEEE P1609.3 defines a WAVE Short Message Protocol (WSMP) that allows a simple way for a terminal to send messages in the local vicinity of other terminals within local radio range. WSMP allows for direct message addressing based on the Medium Access Control (MAC) address of the intended recipient, but in practice most WAVE Short Messages (WSMs) are broadcast and therefore, are not addressed to any specific recipient.

The current DSRC standards divide up the 75 MHz spectrum into 10 MHz channels. This allows RSEs in local proximity of each other to provide services without causing interference. Also, because the physical layer protocols are based on IEEE 802.11a, the DSRC standard allows for use of existing commercial IEEE 802.11a radio components. Since it is critical for safety reasons to ensure that all terminals can hear each other, and the standards developers did not want to assume the use of multiple radio receiver systems (or very wide band receiver systems), a method for channel management was developed and described in IEEE P1609.3 and P1609.4. The approach separates terminal operations into two modes: “Provider” mode and “User” mode, and splits the use of channels into two time intervals (of 50 ms each). The Control Channel (CCH) interval and the Service Channel (SCH) interval are illustrated in Figure 4-2. All terminals are required to monitor the CCH during the CCH interval. In Provider mode, the terminal transmits a WAVE Service Advertisement (WSA) on the CCH during the CCH interval, and since all terminals are monitoring this channel at that time, they all receive the WSA. The WSA contains a list of the services that the provider (typically an RSE) will provide during the SCH interval along with the SCH channel number they will be using. The services are identified by a code number known as a Provider Service Identifier (PSID). If a terminal in User mode (typically an OBE) receives a WSA that contains a PSID of interest (for example, a message associated with an application that is active on that terminal), the terminal will switch to the appropriate SCH during the SCH interval, and make use of that service.
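The interval structure and the User-mode reaction to a WSA can be sketched as follows. The 50 ms interval lengths come from the text above; which interval occupies the first half of the synchronization period, and the channel numbers shown, are assumptions for illustration only.

```python
CCH_INTERVAL_MS = 50
SCH_INTERVAL_MS = 50
SYNC_PERIOD_MS = CCH_INTERVAL_MS + SCH_INTERVAL_MS

def current_interval(ms_into_second):
    """Which interval the radio is in, given milliseconds past the top
    of the (GPS-synchronized) second. The 50 ms values are from the
    text; placing the CCH interval first is an assumption here."""
    return "CCH" if (ms_into_second % SYNC_PERIOD_MS) < CCH_INTERVAL_MS else "SCH"

def choose_sch(wsa_services, registered_psids):
    """User-mode reaction to a WSA: if any advertised PSID matches an
    active application, return the advertised SCH number to tune to."""
    for psid, channel in wsa_services:
        if psid in registered_psids:
            return channel
    return None   # nothing of interest: remain monitoring the CCH

print(current_interval(30), current_interval(70))       # CCH SCH
print(choose_sch([(0x20, 174), (0x21, 176)], {0x21}))   # 176
```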

The uppermost level at the left side is a box labeled WAVE Service Advertisement, which is above a box labeled Safety Messages, which is above a long box labeled Control Channel Interval. Next to Safety Messages is a box labeled Service Messages, which feeds through Control Channel Interval to Service Channel Interval.

Figure 4-2 DSRC Channel Management Concept

Because all terminals are required to monitor the CCH during the CCH interval, all high priority safety messages are sent on the CCH during the CCH interval.

All low priority services and other services using Internet Protocol (IP) are restricted to use the SCH during the SCH interval. The result of this method is that all terminals have a high probability of receiving important messages, and less important message traffic is distributed across the other channels, thereby reducing congestion.

IP transactions typically require some form of network setup, and, as described previously, the DSRC protocol does not establish this. To support this type of traffic, the WSA also contains the IP address of the provider. In general, the standard does not describe the use of IP between OBEs because OBE-to-OBE messaging is safety-related and will use WSMP on the CCH. This avoids any issues with OBEs needing to route packets (although this sort of usage is not prohibited, it is just not defined in the standards). Once a user terminal has acquired the RSE IP address, it can then create its own IP address using Internet Protocol version 6 (IPv6) rules and can then send IP packets to remote service providers. These packets are routed by the RSE through the backhaul network to the SDN and from there, through the network gateway to the Internet (and then to the service provider).
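As one illustration of how a terminal might create its own IPv6 address, the standard Modified EUI-64 expansion derives an interface identifier from the radio's MAC address under an advertised prefix. Whether the POC used exactly this method is not stated here, so treat this as a sketch of general IPv6 stateless addressing rules rather than of the POC implementation; the prefix and MAC below are made up.

```python
def eui64_interface_id(mac: str) -> bytes:
    """Standard IPv6 Modified EUI-64 interface ID from a 48-bit MAC:
    flip the universal/local bit and insert FF:FE in the middle."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                       # flip the U/L bit
    return bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])

def make_ipv6(prefix: str, mac: str) -> str:
    """Join a /64 prefix (written with a trailing colon) to the
    EUI-64 interface identifier, formatted as four hex groups."""
    iid = eui64_interface_id(mac)
    groups = [f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

# A hypothetical /64 prefix learned from the RSE's advertisement:
print(make_ipv6("2001:db8:0:1:", "02:00:5e:10:00:01"))
# 2001:db8:0:1:0:5eff:fe10:1
```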

While somewhat more complex than typical protocols, DSRC achieves the unusual feat of administering communications resources in real time to assure that critical safety messages have top priority, while also allowing lower priority messages, both local messages and messages bound for distant servers, to use the system simultaneously.

4.4 Security Subsystem

The VII Security subsystem is a complex set of functions and services that operate in parallel with the other elements of the system to ensure safe and verifiable system behavior and to prevent misuse of, and attacks on, the system.

4.4.1 Security Subsystem Objectives

The VII Security subsystem is aimed at ensuring three basic objectives: privacy, authenticity and robustness. The basic structure of the Security system is also designed to provide assurance of the confidentiality of private message traffic, the authenticity of public message traffic and the anonymity of private generators of public messages.

Privacy

Privacy is addressed in two ways in the VII Security subsystem. Fundamental to the system operation is the assurance of anonymity and confidentiality. While service providers outside the system may need to know the identity of a specific OBE, the VII system itself has no reason to know this information. The system has been specifically designed to avoid requiring any form of traceable or persistent identification of any OBE. In addition, when identifying information is passed through the system to trusted service providers, the system provides mechanisms to encrypt this information so that none of the system elements or operators can access it. Finally, these encryption schemes are also used to suppress the opportunity for observers to correlate operational information (e.g. vehicle speed information) with physical observation, and thus the system also protects against misuse by external attackers.

Authenticity

In any system, it is desirable to require users to prove their authorization to access and/or use the system's resources. The VII system is unique in that, for the OBE, this authentication must be accomplished without violating the user's privacy. The VII Security system provides a sophisticated means for validating an OBE's legitimacy without identifying the OBE. This approach allows users to be assured that information provided by the system is legitimate and truthful, and it allows the system to deny access to unauthorized users or to OBEs that appear to have been tampered with.

Robustness

It is inevitable that the system will be attacked. These attacks may be full-scale sophisticated attempts to disrupt the system, or they may be small-scale pranks. In any case, the system must make it very difficult to mount an attack; it must be capable of identifying and terminating a severe attack in progress, and it must provide a means for rapid recovery of full capability following any actions to terminate the attack.

4.4.2 Security Constraints

The system must perform all of the functions summarized in Section 4.2 while subject to the following operational constraints:

Anonymity

Anonymity was discussed briefly as an element of privacy in Section 4.4.1. The system must perform all of its required functions without identifying the OBE and without disclosing any private information being passed through the system between trusted users and providers.

Inability to Track

The system is designed to assure anonymity and to protect private information while inside the system. However, this is not sufficient to assure that the system cannot be used for improper purposes. Since the system will be used by vehicle users as they move about geographically, it must be impossible to use anonymous and encrypted vehicle messages to track a vehicle from place to place. This means that the messages must not only be non-identifying, but they must also have limited and controllable relationships to each other, so that they cannot be linked together to form a trace of the movements of the vehicle. In other words, message transactions occurring at different geographic locations in the system must not contain any information that allows an observer (legitimate or not) to know the path of the vehicle with which they are associated beyond a certain distance.
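One common way to achieve this kind of unlinkability, offered here purely as an illustration and not as the POC's actual mechanism, is to have the OBE transmit under short-lived pseudonyms that rotate after a bounded distance:

```python
import secrets

class PseudonymManager:
    """Illustrative sketch (not the POC mechanism): an OBE tags its
    messages with a short-lived pseudonym and rotates it after a
    distance threshold, so messages sent in different areas share no
    linkable identifier. The rotation policy here is invented."""

    def __init__(self, rotate_after_km=2.0):
        self.rotate_after_km = rotate_after_km
        self.distance_km = 0.0
        self.pseudonym = secrets.token_hex(4)

    def tag_message(self, payload, km_travelled):
        self.distance_km += km_travelled
        if self.distance_km >= self.rotate_after_km:
            self.distance_km = 0.0
            self.pseudonym = secrets.token_hex(4)   # unlinkable to the old one
        return {"id": self.pseudonym, "payload": payload}

pm = PseudonymManager()
a = pm.tag_message("speed=55", km_travelled=0.5)
b = pm.tag_message("speed=57", km_travelled=3.0)   # exceeds threshold: rotates
print(a["id"] != b["id"])  # True -- the two reports cannot be linked by ID
```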

Scalability

Since the deployed system will include hundreds of millions of vehicles, the security solutions used must be scalable to these large volumes without incurring excessive costs or performance degradation. The required increases in hardware and processing should scale, at worst, linearly and preferably sub-linearly with the number of users. Similarly, low levels of misbehavior should not result in increasing disruption of the system as the user population increases.

Lifecycle Management

The system must be manageable without imposing any special or unusual service requirements over the vehicle's life span. Security updates and re-authorizations should, under normal conditions, occur transparently to ordinary users or, at worst, occur concurrently with other service and maintenance activities.

4.4.3 POC Security Architecture

The VII Security system is based on the well-known asymmetric cryptography system. This approach uses pairs of public and private keys to encrypt and decrypt information. The keys are mathematically designed so that each key will decrypt what the other key encrypts (a so-called asymmetric key pair). These pairs typically are separated into a public key (one made generally available) and a private key (one kept secret). In many cases, the communicating parties do not have an established trusted relationship. As a result, the parties need to send their public keys to each other. To assure the authenticity of these keys, each public key is digitally signed using the private key of a well-known CA (that presumably both parties know and trust). Digital signing is a process whereby a checksum (called a “digest” or “hash”) of the signed document (in this case, the key) is encrypted using the signer's private key. The signer's public key is typically sent with the certificate, but it is also verifiable by checking with the CA. Using the known public key, the receiver can check that the received signature “decrypts” to give the checksum of the received message. This “verification” process assures the receiver that the sender of the message is certified by the CA and that the message was unchanged and sent by the claimed sender. In many cases, the recipient is already in possession of the CA's public key.
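The digest/sign/verify flow can be demonstrated with textbook RSA and deliberately tiny numbers. This is purely illustrative: the VII system uses ECC keys and IEEE 1609.2 certificates, and real signatures are never computed with toy parameters like these.

```python
import hashlib

# Toy "sign = encrypt the digest with the private key" demonstration
# using textbook RSA with tiny numbers. Real VII security uses ECC
# (IEEE 1609.2); this only illustrates the digest/verify flow.
p, q = 61, 53
n = p * q                           # 3233 (toy modulus)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def digest(message: bytes) -> int:
    # Checksum ("digest") of the message, reduced to fit the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)      # "encrypt" digest with private key

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)  # "decrypt" with public key

msg = b"curve ahead, 35 mph advised"
sig = sign(msg)
print(verify(msg, sig), verify(msg, (sig + 1) % n))  # True False
```

A forged or altered signature fails verification because only the holder of the private key can produce a value whose public-key "decryption" matches the digest.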

Conventional public key systems use very large keys. Since all messages in the VII system will eventually be transmitted over the limited bandwidth radio link, the VII Security system uses keys based on Elliptic Curve Cryptography (ECC). These keys are typically about 1/4 the size of conventional keys. The resulting certificates are about 1/8 the size of conventional certificates as defined in the X.509 standard (the certificates used in Internet security). The certificates for VII vehicle security are defined in the IEEE P1609.2 standard.

The various security operations are shown in Figure 4-3. This figure illustrates the main security relationships between the various elements of the system that use the IEEE 1609.2 standard. This includes signing and verification of WSAs, WSMs and IP traffic between OBE applications, as well as certificate management functions and methods to secure probe data and network transactions. The POC project included development and test of all elements of this subsystem except the VII Host Identity Protocol (V-HIP) function.

The Security Services in the OBE are described in Section 4.5.3.4, and the CA structure for the VII system is described in Section 4.11. Volume 2b, Infrastructure System Technical Description, provides an overview of the network user authorization and other security mechanisms used to secure the various SDN, RSE and ENOC transactions.

Complex chart with seven boxes across the uppermost level, which feed down and across at various levels below. The boxes are (1) OBE Application, (2) OBE Security Libraries, (3) RSE Security Libraries, (4) Service Delivery Node/ENOC, (5) Transaction Service Manager, (6) Network User, and (7) Certificate Authority. The boxes below OBE Application are Security APIs (optional) and WAVE Security Context API. These feed or are fed by routines for network user management, identification, and authorization, various signing and identification processes, encryption and decryption, and OBE and RSE certificate management.

Figure 4-3 Security Subsystem Transactions

4.5 On-Board Equipment Description

The OBE is a self-contained computing system that supports a wide variety of applications and services. It is typically intended to be used in a vehicle, although it is also capable of bench-top operation. It was intended not as a deployable platform, but as a test platform for use in the POC.

The OBE computing platform hardware is the central piece of hardware responsible for vehicle interactions within the VII network. The hardware supports communications with other VII components, exchanges data with Original Equipment Manufacturer (OEM) vehicle systems through a Controller Area Network (CAN) interface, and accommodates driver interaction through a Human-Machine Interface (HMI). In addition to providing the hardware implementation of VII OBE interfaces for the POC, the OBE computing platform hardware also provides daughter card slots and assorted local interfaces which provide feature, control and test flexibility during the POC. Figure 4-4 provides a diagram showing the OBE computing platform hardware within the context of the POC vehicle and related VII components.

Diagram with three main blocks aligned vertically, and items feeding into the lowermost block from either side. At the top, POC Applications rests on POC OBE Software Services, which rests upon POC Computing Platform. On the left, Interface to Vehicle Systems, Interface to Vehicle HMI, and Test, Monitoring, and Control Interface lead into POC Computing Platform. On the right, GPS Interface and DSRC Interface to Other Vehicles and RSEs lead into the POC Computing Platform.

Figure 4-4 OBE Subsystem Interface Diagram

The OBE subsystem is shown in Figure 4-5. This subsystem is built around an Intel-based computer (the OBE processing unit) running the Linux Operating System (OS) and configured with a variety of software services, as described later in this section. In support of the processing unit, the OBE subsystem also includes a touch screen display device, an external combined Global Positioning System (GPS) and DSRC antenna, a programmable power management system and an external positioning unit.

Two blocks at the top, Vehicle Passenger Compartment and Vehicle Exterior, are connected to a larger box with vehicle trunk subsystem components. The block for Vehicle Passenger Compartment includes Vehicle Ignition Switch, Cockpit Ethernet (laptop), Touchscreen Display and Audio, and Vehicle CAN Bus, which connect to the box below via 12 Volt power supplies, Ethernet, VGA and USB, and CAN. The block for Vehicle Exterior includes Integrated Antenna Assembly, and connects to the box below via DSRC and GPS. The subsystem components in the lowermost box include power switch, supply and distribution, Ethernet hub to trunk Ethernet, OBE processing unit to trunk test ports, Positioning Unit and Power Splitter. The Positioning Unit connects to the Vehicle Backup Light.

Figure 4-5 OBE Subsystem Block Diagram

4.5.1 OBE Processing Unit

To minimize design and development time, the hardware computing platform for the OBE was selected from a range of off-the-shelf ruggedized computers designed for mobile applications. A commercial WiFi radio was added to provide the physical layer of the DSRC Radio, and a hardware accelerator was added to augment the processing speed required for security functions. A Linux OS was selected to match the various system requirements.

4.5.1.1 EuroTech DuraCOR

The primary processing functions for the OBE are performed by the EuroTech DuraCOR unit. This is a rugged, self-contained, convection-cooled (fan-less) embedded computer based on a standard x86 architecture.

The DuraCOR unit was selected from among dozens of candidate units based on its unique combination of size, packaging, processing capability and the number and type of I/O interfaces. EuroTech has supplied transit and rail onboard computing modules in Europe and had some experience in the U.S. market as well. The DuraCOR uses an Intel Celeron 400 MHz processor with 256 MB Synchronous Dynamic Random Access Memory (SDRAM). Program memory is provided by a 2 GB solid-state disk for reliability and robustness to the environmental conditions expected in the vehicle environment. While this unit is modest by Personal Computer (PC) standards, it is substantially more capable than current automotive embedded production systems which operate at lower speeds and support less memory.

One of the key factors in selecting the DuraCOR was the availability of numerous I/O options. The DuraCOR has eight serial ports and two CAN interfaces, as well as four Universal Serial Bus (USB) interfaces (two USB 2.0 and two USB 1.1), a Local Interconnect Network (LIN) I/O and 10/100BaseT Ethernet ports. In addition, the usual Video Graphics Array (VGA), audio and keyboard/mouse interfaces are provided.

The DuraCOR also supports two Mini-PCI expansion ports. One port is used for the DSRC Radio card and the other supports the High Performance Security Accelerating Module (HPSAM) security processor/accelerator. An internal Wide Area Augmentation System (WAAS)-enabled GPS device is included, and this is used to provide backup positioning as well as the precise Pulse Per Second timing used for channel synchronization (see Section 4.3).

The module accepts a 12/24 V supply and specifies an ambient operational temperature range of -20°C to +55°C (the unit was tested successfully to +65°C). Vibration tolerance meets EN 50155 Category 1 Class B, and Electromagnetic Compatibility (EMC) is supported per European Standard EN 50155 and Economic Commission for Europe/Organisation des Nations Unies (ECE/ONU) Regulation No. 10/2, as well as the essential constraints defined in EN 60690. The basic unit is shown in Figure 4-6.

A photograph shows a black box with connector ports. Photograph provided by Eurotech, Inc.

Figure 4-6 DuraCOR Processing Unit

The internal architecture, showing the motherboard and the supporting expansion cards, is presented in Figures 4-7 and 4-8.

A diagram indicates location of components behind the maintenance connectors and protective cover. These include CPU with structural heat dissipation, Mass Storage, Expansion Slot, a carrier with PSU, WIFI, monitor filters and protection, and two expansion slots. The rear side has labels designating Field Input/Output. Photograph provided by Eurotech, Inc.

Figure 4-7 DuraCOR Unit Physical Architecture


Photograph of a circuit board with labels indicating location of inputs, power supply, slots, input/output ports, status LEDs, and monitor. Photograph provided by Eurotech, Inc.

Figure 4-8 DuraCOR Unit Motherboard

4.5.1.2 Wind River Linux Operating System

The operating system used on the EuroTech DuraCOR unit is the Wind River distribution of the Linux OS. Wind River's Linux platform is a fully tested and validated distribution based on Linux 2.6 kernel technology.

All components of the platform, including the kernel, integrated patches and packages, and supported hardware architectures and boards, have been exhaustively tested and validated by Wind River. Some of the key benefits achieved by the OBE team from standardizing on Wind River's Linux platform include:

The Wind River Linux platform licensed for the OBE team also included Wind River's Workbench Development Suite. This Eclipse-based suite offered full capability across the development process in a single integrated environment. This suite came complete with integrated tools for debugging, code analysis and testing.

Wind River's Linux distribution and the associated bootloader and Board Support Package (BSP) for the DuraCOR hardware were made available to the OBE team members.

4.5.1.3 DSRC Radio Implementation

As described in Section 4.3, the DSRC/WAVE system is based on IEEE 802.11p and IEEE 1609 standards. The OBE DSRC Radio is implemented as a hybrid hardware and software system as illustrated in Figure 4-9.

The physical layer and the supporting IEEE 802.11p protocols are implemented using a commercial WiFi radio packaged on a Mini-PCI card. This card contains firmware that was modified to conform to the IEEE 802.11p standard. The upper layers of the DSRC protocol defined in IEEE 1609, known as WAVE protocols, are implemented in software running within the OBE software system.

The upper and lower layers of the radio subsystem are managed by a software element known as the WAVE Management Entity (WME). This forms what is known as the “management plane” of the radio, while the layers that operate on the messages themselves are called the “data plane.”

Three blocks in horizontal arrangement indicate system components. The left block is Security Services. The middle block is WAVE Management Entity. The right block is a cluster of elements, including WAVE Short Message Protocol, IP Stack (Linux), WAVE Upper Layer (networking and service control), WAVE Upper Medium Access Control Layer (channel coordination), DSRC Lower Medium Access Control Layer (MAC, CSMA), and DSRC Physical Layer.

Figure 4-9 DSRC/WAVE Radio POC Architecture

DSRC Layers

The DSRC Radio physical layer and lower Medium Access Control (MAC) layer are responsible for physically generating and receiving the RF signals and for controlling the basic operations associated with sending and receiving these signals. The requirements for this operation are specified in the IEEE 802.11p standard which defines DSRC Radio.

The physical and Lower MAC layers are implemented using an Atheros radio subsystem on a Mini-PCI card. The base radio card is designed to support the IEEE 802.11a WiFi standard, which operates in a slightly lower frequency band and uses slightly different protocols. The basic IEEE 802.11a operation has not changed, but key elements have been added to allow the system to operate effectively in the high-speed vehicle environment, where it is not possible (or necessary) to set up a full-blown network prior to communicating. The changes to the protocol stack are summarized in Figure 4-10.

Block diagram with two groups, Linux software system and Management Plane. The uppermost group includes a box labeled WAVE Management Entity, then two stacked boxes labeled MLME Interface and PLME Interface, then stacked boxes labeled Data Interface, WAVE Upper Medium Access Control Layer, DSRC Lower Medium Access Control Layer, Atheros Hardware Abstraction Layer, and Mini-PCI Bus Interface Driver. This group is separated by a dashed line from a box labeled Atheros AR 5006X Chipset and connections to a Data Plane and RF Antenna.

Figure 4-10 DSRC Radio POC Architecture

The Atheros Mini-PCI radio card is shown in Figure 4-11.

 Photograph shows a circuit board with interface for a computer slot.

Figure 4-11 DSRC Radio Mini-PCI Card

WAVE Layers

The upper layers of the DSRC/WAVE Radio implement the WAVE part of the overall protocol as described in the IEEE 1609 standards. This includes the overall service management logic that determines how a WAVE radio decides what services from which providers to use, the WSMP, and logic to manage the seven different DSRC channels defined for use in the U.S. by the FCC.

The WAVE layers support two different types of message elements: conventional IP packets and WSMs. As illustrated in Figure 4-9, this complicates the upper layers, since normal WiFi radios simply pass incoming packets to the IP stack provided by the OS. For IP communications, the VII implementation is not particularly different from this, but for WSMs there is no native function to route packets to the intended applications. As a result, the upper layer WAVE implementation also provides an API that allows user applications to register as either a User or a Provider, to support service and channel decisions (see below), and to send and receive WSMs. This is illustrated in Figure 4-12.
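The missing native routing for WSMs amounts to keeping a registry that maps each PSID to the application that registered for it. A minimal sketch, with invented API names (the actual POC interfaces are those shown in the referenced figures):

```python
class WsmDispatcher:
    """Sketch of the PSID-based routing the WAVE upper layer must add:
    an incoming WSM carries a PSID rather than a port number, so the
    stack keeps a registry of which application asked for which PSID.
    Method names here are invented for illustration."""

    def __init__(self):
        self.handlers = {}   # psid -> application callback

    def register(self, psid, callback):
        self.handlers[psid] = callback

    def on_wsm_received(self, psid, payload):
        handler = self.handlers.get(psid)
        if handler is None:
            return False     # no application registered: drop the WSM
        handler(payload)
        return True

got = []
dispatcher = WsmDispatcher()
dispatcher.register(0x20, got.append)
print(dispatcher.on_wsm_received(0x20, b"signage"),   # True, delivered
      dispatcher.on_wsm_received(0x99, b"ignored"))   # False, dropped
```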

As described in Section 4.3, the DSRC/WAVE protocol uses a CCH and SCH interval concept (See Figure 4-2). By requiring that every radio monitor the CCH during the CCH interval, the system assures that any given radio is tuned to the right channel at the right time to hear important messages regarding safety and service announcements. A DSRC/WAVE radio may operate as either a User or a Provider. While both operate the same from a message communications perspective, a Provider is also able to issue a special type of message known as a WSA. The WSA is broadcast on the CCH during the CCH interval to announce or advertise the services that the Provider is offering, and indicates on which DSRC channels these services may be found. In general, OBEs operate in the User mode, although this is not a requirement.

The WME is responsible for receiving any WSAs and for deciding which channel (if any) to use during the SCH interval. This is done on the basis of what services the OBE applications have registered for, what services have been advertised by RSEs and the relative priority of those services. The WME also interacts with the Security Services (See Section 4.5.3.4) to verify the digital signatures of any received WSAs and in Provider mode, to digitally sign any outgoing WSAs.
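The WME's selection logic can be sketched as follows. This is an illustrative simplification, not the POC code: a real WSA carries more fields (PSC, security information, channel parameters), and the selection policy shown here, highest advertised priority among registered PSIDs, is an assumption.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch of a WME channel decision: from the WSAs heard during the CCH
// interval, pick an advertised service matching a PSID the OBE applications
// registered for, preferring the highest-priority match, and tune to its
// channel for the SCH interval.
public class WmeDecision {
    static class Advertised {
        final int psid, channel, priority;
        Advertised(int psid, int channel, int priority) {
            this.psid = psid; this.channel = channel; this.priority = priority;
        }
    }

    static Optional<Advertised> choose(List<Integer> registeredPsids,
                                       List<Advertised> heard) {
        return heard.stream()
                .filter(a -> registeredPsids.contains(a.psid))
                .max(Comparator.comparingInt(a -> a.priority));
    }

    public static void main(String[] args) {
        List<Advertised> heard = List.of(
                new Advertised(0x20, 174, 2),   // e.g., a signage service on channel 174
                new Advertised(0x55, 176, 5));  // e.g., a tolling service on channel 176
        // This OBE's applications registered only for PSID 0x20.
        System.out.println(choose(List.of(0x20), heard).get().channel);  // 174
    }
}
```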

The Channel Coordination Layer (CCL) is responsible for controlling the channel used by the radio, and for routing messages into queues for the channels on which they are intended to be sent. This is a key function, since the radio must be synchronized with all other radios so that the CCH and SCH intervals line up, and messages intended for a specific service must be held until the radio is tuned to that channel. The channel switching operation is synchronized to the Pulse Per Second (PPS) signal provided by the GPS receiver.
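The interval timing described above can be sketched as follows, assuming the nominal IEEE 1609.4 scheme of alternating 50 ms CCH and SCH intervals aligned to the GPS one-second boundary; the actual POC interval lengths and guard times may differ.

```java
// Sketch (not the POC code) of how a channel coordinator might derive the
// current interval from GPS time, assuming alternating 50 ms Control Channel
// (CCH) and Service Channel (SCH) intervals aligned to the PPS boundary.
public class ChannelCoordinator {
    static final long SYNC_PERIOD_MS = 100;  // one CCH + one SCH interval
    static final long CCH_INTERVAL_MS = 50;  // CCH occupies the first half

    enum Interval { CCH, SCH }

    /** msSincePps: milliseconds elapsed since the last GPS Pulse Per Second. */
    static Interval currentInterval(long msSincePps) {
        long offset = msSincePps % SYNC_PERIOD_MS;
        return (offset < CCH_INTERVAL_MS) ? Interval.CCH : Interval.SCH;
    }

    public static void main(String[] args) {
        System.out.println(currentInterval(20));   // CCH
        System.out.println(currentInterval(70));   // SCH
        System.out.println(currentInterval(470));  // SCH
    }
}
```

Because every radio derives the same interval from the same PPS-disciplined clock, all radios are guaranteed to be listening on the CCH at the same time.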

Diagram with four levels. The uppermost level is Application. It connects on the left side down to WME API and on the right side down to Socket API. WME API connects down to WME and further to UMLME. Configuration Parameters feed the WME from the left. The Socket API feeds down to WSMP on the left and TCP/UDP IP Stack on the right, and further to WAVE Upper Medium Access Control (MAC) Layer, which is also connected to UMLME on the left.

Figure 4-12 WAVE Upper Layer Software POC Architecture

4.5.1.4 Security Accelerator

The IEEE P1609.2 Security Protocol requires the use of ECC. This approach has a significant advantage in that it results in substantially smaller keys for a given level of security compared to other systems (e.g., RSA keys). However, because ECC is an asymmetric operation and relatively new, software-based solutions for encryption and decryption are slow and suboptimal.

A typical VII vehicle is expected to be in range of about 223 other vehicles under worst-case load conditions (a typical eight-lane freeway with vehicles in all lanes at 2 m spacing, each vehicle 5 m long, and a range of 100 m forward and behind yields ≈28 vehicles per lane, or 224 vehicles within 100 m of each other). If every message is signed, and every OBE sends Heartbeat messages at 100 ms intervals, each OBE must encrypt 10 messages per second and decrypt 2,230 messages per second (10 per second from each of the other 223 vehicles). As a result, the worst-case security processing load is about 2,240 operations per second.
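The load arithmetic above can be reproduced directly:

```java
// Reproduces the worst-case security load estimate from the text: an
// eight-lane freeway, 5 m vehicles at 2 m spacing, a 100 m radius of
// interest, and every OBE signing a Heartbeat message every 100 ms.
public class SecurityLoad {
    static int vehiclesInRange(int lanes, double vehicleM, double gapM, double rangeM) {
        double pitch = vehicleM + gapM;              // 7 m of roadway per vehicle
        int perLane = (int) ((2 * rangeM) / pitch);  // 100 m ahead + 100 m behind -> 28
        return perLane * lanes;                      // 224 on an eight-lane freeway
    }

    public static void main(String[] args) {
        int total = vehiclesInRange(8, 5.0, 2.0, 100.0);  // 224 vehicles
        int neighbors = total - 1;                        // 223 other vehicles
        int signPerSec = 10;                              // one Heartbeat every 100 ms
        int verifyPerSec = neighbors * signPerSec;        // 2,230 inbound ops/s
        System.out.println(total + " " + (signPerSec + verifyPerSec));  // 224 2240
    }
}
```

The resulting 2,240 operations per second is the figure against which the 2,500 operation-per-second HPSAM specification was set.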

It was decided that this represented a significant processing load on the OBE and might impact other software functions. As a result, the OBE was configured to provide a Mini-PCI slot that was used to support a hardware accelerator specifically designed to perform ECC operations.

The Crypto accelerator card (also known as the HPSAM) is shown in Figure 4-13. This card contains two special purpose chips. One runs the Peripheral Component Interconnect (PCI) bus interface, and the other is a high speed Field Programmable Gate Array (FPGA) that executes the ECC functions. A special software driver resident on the OBE provides a software interface that allows the OBE Security Services to pass byte fields for encryption and decryption to the accelerator card.

The HPSAM was specified to support up to 2500 ECC operations per second.

Photograph of a circuit board is labeled to indicate Security IP core and PCI bus IP core. Photograph provided by Kapsch TrafficCom, Inc

Figure 4-13 HPSAM Security Accelerator Card

4.5.2 POC OBE Software Architecture

As shown in Figure 4-14, the OBE uses a shared services architecture. This means that key services expected to be used by most applications are provided as resources in the OBE. Any application needing these resources can then make use of them through simple software interfaces. Since many VII applications involve similar kinds of data and operations, the shared services approach avoids the need to implement these functions within each application.

Block diagram shows services and applications that are grouped with the OSGi framework and Java virtual machine and libraries at the Native API level, routines, services, and libraries grouped at the operating system level to the Linux kernel, and device drivers at the bottom for the various cards and interfaces.

Figure 4-14 OBE POC Software Architecture

The most important OBE services are described in more detail in the following sections.

4.5.3 OBE Software Services

A description of the use of the Open Services Gateway Initiative (OSGi) framework and the description of the major OBE services are provided in the following sections.

4.5.3.1 Open Services Gateway Initiative Framework

The OSGi Service Platform is an open Java-based component framework that allows the rapid and safe installation and operation of services and applications (known as “bundles” in the OSGi specification). The OSGi Service Platform provides a service oriented architecture that allows applications to dynamically discover and use services provided by other applications running in the same environment. This service oriented architecture allows OSGi applications to be much smaller than conventional applications because they do not need to implement all of the various services they might otherwise need (e.g. positioning, security, etc).

Starting, stopping, and updating bundles can be performed without restarting the system. The OSGi architecture also gives the operator fine-grained control of the Service Platform through a model that allows operators to ensure their required policies are enforced.

As shown in Figure 4-14, the applications and most higher level services were implemented as OSGi bundles. This allowed very rapid re-configuration of the OBE using different applications, and also simplified software development and system integration by making installation of new versions of these higher level software bundles very fast and simple.

Although not tested during the POC program, the OSGi Framework also allows remote installation and control of bundles. While this was a key, longer term element of the rationale to use this system, it is not necessarily an essential part of the deployment architecture. Ideally, in the longer term, use of this framework could allow the OBEs to be remotely managed and thereby acquire new capability without requiring the vehicle to be brought in for servicing. Adoption of this approach would be at the discretion of the automakers.

4.5.3.2 Network Services Enablers Subsystem

The Network Service Enablers subsystem simplifies the communications process between OBE applications and network side services.

The VII system supports a variety of communications modes tailored to specific types of system functions. In addition, the DSRC system, including its security functions, can involve rather complex processes from an OBE application perspective, and because vehicles move through the network with only intermittent connectivity, there is no way for a network service to know how to get a message to a particular vehicle. Also, the OBE is designed to support a variety of simultaneous applications, and these applications need to share the communications resources. Since the applications are all developed by separate organizations, it was decided that the OBE should provide a common mechanism for sharing the communications resources. To do otherwise would likely have resulted in a variety of conflicting approaches developed by the independent application teams.

The Network Service Enablers subsystem effectively creates a single interface through which all applications access services from the DSRC Radio and use the DSRC security functions. It also supports an Internet-style back-end that allows services to be added, modified, and combined at a service provider without requiring changes at the OBE. This is important as a scalability feature. As described earlier, this Service Oriented Architecture (SOA) is well known on the Internet, where connectivity is stable, but it was not known whether the same approach would be feasible in the intermittently connected environment of the VII.

The POC Network Services Enablers architecture is shown in Figure 4-15.

Block diagram with seven elements. Two elements, Vehicle Applications and OBE Communications Manager, are to the left of an element that groups RSE and NAP on an upper path and PDS on a lower path. The upper path continues to the element Transaction Service Manager, which has three branches: Access Policies, Service Providers, and Data Subscriber Service Providers. The connection from OBE Communications Manager to PDS to Data Subscriber Service Providers is dashed, indicating it is indirect.

Figure 4-15 POC Network Services Enablers Architecture

The Network Services Enablers subsystem is composed of two primary elements. In the OBE, a “middleware” system known as the Communications Manager provides message routing and service/message registration and security functions for the OBE applications. In an analogous manner, at the other end of the system, outside the access gateway, the Transaction Service Manager (TSM) aggregates services from different sources into a single suite of services. The Communications Manager and TSM also work together to bridge between service sessions as the OBE-equipped vehicle moves out of coverage for one RSE and then re-connects some time later at another RSE. This “hand off” between RSEs is a new, key feature of the VII system that has not been done before in the context of Internet style services, and it substantially reduces the impact of the intermittent connectivity coverage characteristic of the VII POC architecture.

4.5.3.2.1 OBE Communications Manager

The OBE Communications Manager (OCM) facilitates the interaction between in-vehicle applications and external services by ensuring the transparency and appropriate security of communication from an application perspective. The goal of the Communications Manager is to abstract applications from any details related to communications and communication security, with the understanding that the more functionality is encapsulated inside the service, the easier the task of writing applications becomes. Because the communications process is isolated from the applications, they are shielded from any changes in communications protocols and infrastructure.

As shown in Figure 4-16, the Communications Manager is composed of three primary elements: the Application Manager, the Message Manager, and the Transport Channel. The Communications Manager has two operational modes from an application perspective. First, it provides an interface through which applications can register for services and the messages associated with those services. This administrative process includes establishing security credentials and communications preferences, and it is typically performed once during application start-up. Second, during regular operation, the Communications Manager interacts with the DSRC Radio, notifies applications when any of their registered services are available (i.e., they have been advertised by an RSE), and passes messages between the applications and the DSRC Radio. One key aspect of the Communications Manager is that it is multi-threaded and thus capable of supporting multiple applications. It also effectively serves as a multiplexer, collecting messages to be sent from many applications and distributing received messages to those applications.

The detailed operation of these elements is described in the following sections.

Block diagram showing components of the OBE Communications Manager, with three elements. One element on the top left, Application Manager, includes Application Notifications and VII Service Discovery. This block feeds up to an outside block, Applications (Java). Another element on the top right, Message Manager, includes Message Queue, Message Compression, and Message Security. It has a two-way connection to Applications (Java). A third element, Transport Channel, includes Link Management, End-to-End Security, and Mobile Network Abstraction. The entire block element has a two-way connection to Message Manager, while Link Management feeds up to Application Manager. Link Management is fed by WAVE Management Entity, and Mobile Network Abstraction is fed by TCP/UDP IP Stack, while the entire Transport Channel element is fed by WAVE Short Message Protocol, all part of an outside block.

Figure 4-16 POC OCM

Application Manager

The Application Manager provides a set of interfaces to the OBE applications to set up and manage services. At start-up, the OBE applications register with the Application Manager and provide both security information and the PSIDs for the services they plan to use. The Application Manager informs the Security Services about the needs of the application, and sets up the DSRC Radio to respond when those services are available.

The Service Discovery component of the Application Manager is designed to facilitate dynamic interaction between various external services and in-vehicle applications. The current discovery mechanism is based on a PSID number that identifies a type of service. This type of service is quite generic and does not provide specific service-related details to applications. A second number, the Provider Service Context (PSC), is a vendor-specific identifier used to denote specific protocols, service versions, or other key service-related information.

As shown in Figure 4-17, the Application Manager function within the Communications Manager provides an interface that allows applications to query and discover the required services in a generic way. Having an intermediate layer provides a uniform way for the discovery of services that simplifies the design of in-vehicle applications.

Block diagram showing connections between six elements. At the top of the diagram are three elements, Application 1, Application 2, and Application 3. These elements have multiple two-way connections to the element Communications Manager. Communications Manager has multiple two-way connections to OBE DSRC Radio at the bottom of the diagram, which has two-way connections to the side to element RSE DSRC Radio.

Figure 4-17 Communications Manager Service Discovery Scheme

Message Manager

The Message Manager functions within the Communications Manager and handles incoming and outgoing message traffic for applications using the WSMP through a socket interface similar to that provided by the Linux OS for IP packets. On the outbound side, OBE applications provide messages to the Message Manager via the socket interface. The Message Manager then processes the message according to the security operations defined by that application at registration, and then passes the message to the Transport Channel for transmission (See next section). On the inbound side, the Message Manager receives messages from the Transport Channel, passes them to the security libraries for verification and/or decryption, and then delivers them to the socket assigned to that message type. The socket assignments are based on the PSID. Each WSM includes the PSID and this is used to bind the messages to the application via the socket assignment.
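The PSID-based binding can be sketched as a simple dispatch table. The handler interface below is an illustrative stand-in for the POC's socket interface, and the PSID values are made up.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of PSID-based message routing: each application registers a handler
// for its PSID, and each inbound WSM is delivered according to the PSID it
// carries.  A real implementation would deliver to a socket, not a callback.
public class WsmDispatcher {
    private final Map<Integer, Consumer<byte[]>> bindings = new HashMap<>();

    void register(int psid, Consumer<byte[]> handler) {
        bindings.put(psid, handler);
    }

    /** Deliver an inbound WSM to whichever application bound its PSID. */
    void dispatch(int psid, byte[] payload) {
        Consumer<byte[]> handler = bindings.get(psid);
        if (handler != null) handler.accept(payload);  // no registration: drop
    }

    public static void main(String[] args) {
        WsmDispatcher dispatcher = new WsmDispatcher();
        dispatcher.register(0x20, p -> System.out.println("signage app: " + p.length + " bytes"));
        dispatcher.dispatch(0x20, new byte[]{1, 2, 3});  // prints "signage app: 3 bytes"
        dispatcher.dispatch(0x99, new byte[]{4});        // unbound PSID: silently dropped
    }
}
```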

The Message Manager is able to handle messages in parallel since the flow of messages may be such that messages are received before prior messages have been fully processed. Lastly, the Message Manager avoids delivery of duplicate incoming messages to registered OBE applications by saving the message ID for received messages and comparing newly arrived messages with the message ID cached by the Communications Manager. This feature can be selected by an application during initial registration.
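The duplicate-suppression behavior can be sketched with a bounded cache of message IDs. The cache size and the LRU eviction policy are illustrative assumptions, not details of the POC implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of duplicate suppression: the Message Manager caches recently seen
// message IDs and drops re-received copies before they reach applications.
public class DuplicateFilter {
    private final int capacity;
    private final Map<String, Boolean> seen;

    DuplicateFilter(int capacity) {
        this.capacity = capacity;
        // Access-ordered map with bounded size, i.e., a small LRU cache.
        this.seen = new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                return size() > DuplicateFilter.this.capacity;
            }
        };
    }

    /** Returns true if the message should be delivered (first sighting). */
    boolean accept(String messageId) {
        return seen.put(messageId, Boolean.TRUE) == null;
    }

    public static void main(String[] args) {
        DuplicateFilter filter = new DuplicateFilter(1000);
        System.out.println(filter.accept("msg-42"));  // true: deliver
        System.out.println(filter.accept("msg-42"));  // false: duplicate, drop
        System.out.println(filter.accept("msg-43"));  // true: deliver
    }
}
```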

Transport Channel

The Transport Channel function within the Communications Manager interfaces to the lower layers of the system, specifically the various DSRC Radio interfaces and security libraries.

In its simplest operation, the Transport Channel receives messages from OBE applications via the Message Manager and provides them to the appropriate DSRC Radio interface for transmission, performing the converse operation for messages received from the DSRC Radio. This operation is performed exclusively on WSMs, since IP-based message traffic uses the IP stack provided in the Linux OS.

For some IP exchanges, however, the Transport Channel is responsible for setting up a special type of IP session. Using the Host Identity Protocol (HIP), the Transport Channel exchanges a set of handshake messages with the TSM. These exchanges result in a secure and anonymous session identity known only to the Transport Channel and the TSM. Using this session, IP data exchanges take place as usual using the Linux IP stack. However, if the DSRC link is lost, for example because the OBE host vehicle drives away from the RSE, the session state is maintained at the TSM and in the Transport Channel for a period of time. If the OBE encounters another RSE before the session times out (the timeout was about 10 minutes in the POC), the Transport Channel and the TSM re-establish the session and agree on a new secure session identifier. This allows the system to pick up the data exchange where it left off, using only the packet routing for the new RSE (which is the new network location for the OBE). The result of this approach is that the OBE can carry out long data transfers or extended transactions with the TSM even though it is entering and leaving the coverage of many different RSEs. This hand-off mechanism effectively creates a semi-seamless network usable for long transactions even though the network does not provide geographically continuous connectivity to the OBE.
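The session-persistence idea (though not HIP itself) can be sketched as a session table with a grace period. The 10-minute timeout matches the POC figure quoted above; the class and field names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of TSM-side session persistence: state survives a DSRC
// link loss for a grace period, so a vehicle that reaches another RSE can
// resume a transfer where it left off rather than starting over.
public class SessionTable {
    static final long TIMEOUT_MS = 10 * 60 * 1000;  // ~10 minutes in the POC

    static class Session {
        long lastSeenMs;
        int bytesTransferred;  // stand-in for resumable transfer state
    }

    private final Map<String, Session> sessions = new HashMap<>();

    /** Called when the DSRC link drops; state is retained with a timestamp. */
    void linkLost(String hostIdentity, long nowMs) {
        Session s = sessions.get(hostIdentity);
        if (s != null) s.lastSeenMs = nowMs;
    }

    /** On re-contact at a new RSE: resume if within the timeout, else restart. */
    Session resume(String hostIdentity, long nowMs) {
        Session s = sessions.get(hostIdentity);
        if (s == null || nowMs - s.lastSeenMs > TIMEOUT_MS) {
            s = new Session();  // expired or unknown: start a fresh session
            sessions.put(hostIdentity, s);
        }
        s.lastSeenMs = nowMs;
        return s;
    }

    public static void main(String[] args) {
        SessionTable tsm = new SessionTable();
        Session s = tsm.resume("obe-1", 0);
        s.bytesTransferred = 5000;
        tsm.linkLost("obe-1", 1000);
        // Re-contact 2 minutes later: within the timeout, state survives.
        System.out.println(tsm.resume("obe-1", 120_000).bytesTransferred);    // 5000
        // Re-contact 20 minutes after that: timed out, state reset.
        System.out.println(tsm.resume("obe-1", 1_320_000).bytesTransferred);  // 0
    }
}
```

Note that the real protocol also negotiates a new secure session identifier on each resume; that cryptographic exchange is omitted here.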

The Transport Channel also interacts with the security libraries (See Section 4.5.3.4) to set up secure links for IP-based applications. This means that the application simply needs to provide security information at registration instructing the Transport Channel that a secure link is needed, and the Transport Channel takes care of the rest of the process. Through this architecture, the applications (and application developers) do not need to include extensive code to interact with the security system, and in fact can successfully use the security system with little or no knowledge of the details of its operation.

4.5.3.2.2 Transaction Service Manager

The TSM is the server-side or off-board portion of the POC Network Services Enablers architecture. It acts as an intermediary for network-based transaction services communicating with applications on the OBE. An intermediary is necessary to act as an access control point as well as a queuing station to mediate asynchronous communication. The TSM also serves as the integration point for cooperative services defining an extensible service framework that may prove useful well beyond the scope of the POC.

The TSM is shown architecturally in Figure 4-18.

Block diagram with six elements. The central element at the top is Service Enterprise Bus, which includes Application Requests and Responses, Message Queue, and Service Requests and Responses. An element on the left, Mobile Application, has a two-way connection to an element that includes Compression and Mobile Network Abstraction, and further two-way connection to Service Enterprise Bus. An element on the right, Transaction Services, also has two-way connections to Service Enterprise Bus. At the bottom of the diagram, Access Control Policies feed up to Security Access and Control, which includes UDDI Service Registry, overlaying Service Enterprise Bus.

Figure 4-18 Transaction Services Manager

The TSM provides Mobile Network Abstraction, Service Orchestration, Message Queuing, and Security Interaction capabilities to ease the development and integration of traditional web services.

Mobile Network Abstraction

Mobile Network Abstraction isolates the problems of dynamic routing and network session discontinuity from the transaction services. All transactional communications between the VII system and the network services are facilitated by Mobile Network Abstraction.

Mobile Network Abstraction provides two related services. First, it works with the Communications Manager in the OBE to bridge service gaps of reasonable duration, and second, it maintains the ability to route packets despite changes in the vehicle's location in the RSE network and the consequent changes in the OBE's IP address. This overcomes key limitations of IP relative to mobile operation. These limitations arose because, during the initial design of IP, almost all machines capable of supporting it were physically large and therefore stationary. There was no obvious need to design for mobile connections, and doing so would have served no purpose at the time while introducing considerable complexity and processing requirements.

In this context, the standards organizations made one decision in particular that made supporting mobile systems rather difficult: the IP address concept overloaded the notions of location and identity, in that “who am I” and “where am I” were both features of the IP address as defined in Internet Protocol Version 4 (IPv4). In mobile systems, however, it is essential to separate identity from location and routing information. Mobile Network Abstraction implements the server side of the HIP described earlier as part of the Communications Manager Transport Channel.

Service Orchestration

Service Orchestration is the concept that multiple network-side services can participate in a single OBE application request. A request can be processed by, or affect, one or more services. A purchase request is one example: a single request to purchase an item may interact with an inventory service, a payment service, and a shipping service. In fact, the inventory service could be invoked twice; first to verify availability of an item, and second, after payment is validated, to update the count of those items. This approach is illustrated in Figure 4-19.

Block diagram with three elements. The central element is Orchestrator Engine, which includes Context Management with subordinate Transaction Management in one box, and Flow Control in another box. At the left, the element Requesting Services has a two-way connection, and at the right the element Responding Service has a two-way connection to the Orchestrator Engine.

Figure 4-19 Service Orchestration

Another important aspect of orchestration is reuse. Not only can customers of one service provider make use of those services, but different service providers (or service provider brands) can orchestrate those same services in other ways.

In the POC implementation, this orchestration is managed by a subsystem known as the Enterprise Service Bus (ESB). The ESB provides two main functions. First, it manages flow control, which defines which services are called and when. Second, it provides context management, so that services participating in a single request do so within the same context. Exceptions or faults that occur at any individual service are reported to and managed by the context. A specialized case of this context coordination is transaction management: transactions across web services are more complicated than transactions within a single database, for example, because multiple systems may be involved, crossing thread and connection boundaries.
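The purchase flow described earlier can be sketched as follows. The service names and the shared context object are illustrative; a real ESB flow would invoke remote web services and manage faults within the shared context rather than call local objects.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of service orchestration: one application request fans out
// to several services under a single flow, with the inventory service
// invoked twice (availability check, then stock update after payment).
public class Orchestrator {
    interface Service { boolean handle(String item, List<String> context); }

    static boolean purchase(String item, Service inventory, Service payment,
                            Service shipping, List<String> context) {
        // Flow control: the orchestrator defines which services run, and when.
        if (!inventory.handle(item, context)) return false;  // 1. verify stock
        if (!payment.handle(item, context)) return false;    // 2. take payment
        inventory.handle(item, context);                     // 3. decrement stock
        return shipping.handle(item, context);               // 4. ship
    }

    public static void main(String[] args) {
        List<String> context = new ArrayList<>();
        Service log = (item, ctx) -> ctx.add(item);  // stand-in service
        boolean ok = purchase("widget", log, log, log, context);
        System.out.println(ok + " " + context.size());  // true 4
    }
}
```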

Message Queuing

The TSM message queuing service manages the exchange of data between service consumers (OBE applications) and service providers (Network User/Providers). The requests and resulting responses are transformed and controlled by a series of workflows residing in the service orchestration layer. Additionally, service access is arbitrated by an authorization mechanism controlled by policies that are expected to be determined in the future. Message Queuing is a mechanism internal to the TSM which is fully isolated from services and applications. In accordance with the design goal of using open standards, the message queue implementation uses Java Message Service (JMS). Messaging with external entities is conducted using Simple Object Access Protocol (SOAP) over Hypertext Transfer Protocol (HTTP).

4.5.3.3 Human Machine Interface Manager

The HMI Manager arbitrates HMI resources amongst applications and provides a toolbox of graphical components to support the user interface for the applications. The HMI Manager is capable of presenting both visual and audible information, including warning symbols and signals.

By defining a common HMI service for the OBE, the HMI is uniform across applications and is usable in a wide variety of vehicles. In addition, all applications are isolated from the HMI design, so the application developers need not be concerned with the details of the specific HMI implementation. This approach simplified the POC development and testing, but the HMI is not expected to be standardized in the actual deployment environment.

POC HMI Manager Architecture and Operation

The HMI Manager architecture is shown in Figure 4-20. The HMI Manager is constructed around a set of core graphical operations and structures known as widgets and templates. These structures define basic screen layouts into which application-specific content can be placed. These structures interact directly with the basic HMI drivers and tools provided by Java.

The HMI Manager provides a plug-in interface that allows each application to define, in a single file, its unique graphical elements such as icons and labels, and application-specific text elements.

In operation, an application interfaces with the HMI Manager via the HMI Manager API. This provides a set of tools that the application can use to write context-specific information into the template, and to specify content from the plug-in to be used on a specific screen. By separating the changeable context-specific content from the more stable graphical elements, the application interface and graphical operations are much simpler. Essentially, the application specifies the content and the HMI Manager constructs the view.

The HMI Manager also arbitrates the use of the HMI resources between applications as described in more detail below.

Block diagram with three elements. The central element is HMI Service, which includes HMI manager and related routines and Application Plug-ins for various applications. At the left, the element labeled Applications has a two-way connection to HMI Service. At the bottom, the element labeled JVM has a two-way connection to HMI Service.

Figure 4-20 POC HMI Manager Architecture

Figure 4-21 provides a brief look at the operation of the HMI Manager. In this example, the “View 12” template is being used. The application specifies, via the API, that it wants to display a sign type (“SignID= 0x24”), and specifies two textual variables that relate to that sign type. The HMI Manager Supervisor goes to the application's plug-in and retrieves the content of sign type “0x24”, which consists of a graphical icon showing a curve warning symbol. Also retrieved are the text strings “Curve Ahead” and “Recommended Speed 40 mph” and the template type to be used. In this case, the plug-in has combined variable data provided by the application (“Curve Ahead” and “40”) with fixed text strings associated with that sign type. The View Builder then compiles the view by placing the icon in the proper location of the template and writing the text strings into the defined locations of the template. The result is the display shown in Figure 4-21. Note that if the curve had been in the other direction, or the recommended speed had been different, the application would have provided this information, and the resulting sign, based on the same process and template, would conform to the specific details provided by the application.

Block diagram with four elements. The central element includes Supervisor, Libraries, and View Builder. It feeds the element Signage Application at the left and the element Signage Plug-in at the right. It also feeds down to a caution message with icon and instructions.

Figure 4-21 HMI Manager Example

Prioritization Scheme for Displaying Messages

As mentioned, the HMI Manager also provides arbitration of the HMI resources. This is necessary because the OBE supports many simultaneous applications, and there is only one HMI. As a result, the HMI Manager must decide, based on priority, which application gets to use the display (or audio) at which time.

HMI arbitration is performed using a prioritization scheme developed by the International Organization for Standardization (ISO): ISO 16951, which uses a Priority Index calculation. The Priority Index approach uses measures of criticality, urgency, user context, and scenario to compute the priority of any given message. This priority value is compared to that of other applications seeking to use the HMI, and the HMI resource is awarded to the application with the highest priority. This approach is shown conceptually in Figure 4-22.

Block diagram with four elements. The element labeled Application Pool includes the various applications and leads to the right to the element that lists Priorities in numerical order. Application Pool also leads downward to an element labeled Application in Focus, which also links to the listed priorities, which feed to the element labeled HMI Display at the right.

Figure 4-22 HMI Display Prioritization

Priorities are defined in a special HMI Priorities properties file, which is accessible to all applications. At startup, each application reads this central property file to obtain the application-specific priorities for each screen the application will eventually display.

For each screen, the HMI Priorities file contains one priority value for each of the factors used in the Priority Index calculation: user initiation, criticality, urgency, and relevancy.

The Priority Index is calculated by the following formula:

Priority = Weight_1 * User Initiated + Weight_2 * Criticality + Weight_3 * Urgency + Weight_4 * Relevancy

The relative weights are defined by a configuration file and may be adjusted to fine-tune the way priorities are determined. When multiple HMI requests are made, the HMI Manager computes the Priority Index for each and, based on their relative values, awards HMI access to the application with the highest Priority Index. This approach assures that the prioritization is context-specific, rather than simply giving a particular application higher priority even when its use of the HMI is not high priority. This method was adequate for the POC but is not likely to be the method of choice for final deployment.
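The Priority Index formula and the resulting arbitration can be sketched as follows. The weight values and factor scores are made-up illustrations; in the POC the weights came from a configuration file.

```java
// Sketch of the Priority Index calculation:
//   Priority = W1*UserInitiated + W2*Criticality + W3*Urgency + W4*Relevancy
// and of awarding the HMI to the request with the highest index.
public class HmiArbiter {
    // Assumed weights for illustration only; the POC read these from a file.
    static final double W1 = 1.0, W2 = 4.0, W3 = 2.0, W4 = 1.0;

    static double priorityIndex(double userInitiated, double criticality,
                                double urgency, double relevancy) {
        return W1 * userInitiated + W2 * criticality + W3 * urgency + W4 * relevancy;
    }

    public static void main(String[] args) {
        // A curve-speed warning vs. a driver-requested advisory (scores made up).
        double warning = priorityIndex(0, 0.9, 0.8, 1.0);   // 6.2
        double advisory = priorityIndex(1, 0.1, 0.2, 0.5);  // 2.3
        // The HMI resource goes to the request with the higher index.
        System.out.println(warning > advisory ? "warning" : "advisory");
    }
}
```

Weighting criticality heavily, as in this sketch, lets a safety warning preempt a driver-initiated screen even though user initiation contributes to the index.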

Advisory Message Template

Figure 4-23 illustrates the Advisory Message template. The image represents a general version of a road sign for a class of signs. For example, an orange diamond represents all signs describing Work Zones and Road Work. “Text Line 1” displays the major “TITLE” of the road sign, and “Text Lines 2-5” display the specific content of the sign. In another example, the image may represent all R2 Speed Regulation signs: “Text Line 1” displays “SPEED LIMIT,” and the following text lines convey the actual speed limit, such as “50 MPH” (See Figures 4-24 and 4-25).

The screen areas labeled “btn” on the right-hand side of the screen allow the user to select which advisory screen to display, as identified by an appropriate category icon. This is useful when multiple advisory signs have been received. The highest priority sign is always displayed automatically, but the user may then choose to view a lower priority sign manually.

The screen areas labeled “btn1” through “btn5” across the top of the screen are populated with button icons of active applications with HMI content. The user can shift the display between applications by pressing the appropriate buttons.

The template indicates the location of buttons 1 through 5 across the top row, and two columns of buttons down the right side of the diagram; an image and text line in the next row, and text lines 2 through 6 below. Diagram provided by Delphi

Figure 4-23 HMI Road Advisory Template

Examples of this template in use are shown below in Figure 4-24 and 4-25.

Rendering of a message displayed showing top buttons labeled Sign, Nav, Toll, Gas, Park, information icons on the buttons down the right side, a warning symbol and text appropriate for a work zone on the highway. Rendering Provided by Delphi

Figure 4-24 Road Work Advisory Example


Rendering of a message displayed showing top buttons labeled Sign, Nav, Toll, Gas, Park, information icons on the buttons down the right side, a speed limit symbol and text indicating the speed limit. Rendering Provided by Delphi

Figure 4-25 Speed Limit Advisory Example

Next Exit Services Template

The Next Exit Services Template is shown in Figure 4-26. This template is slightly more complex than the road advisory template, since these signs contain more graphical and textual information. Each individual sign is split into services to the left and right of the exit ramp. The two data columns shown in the template are used to place left- or right-side services as appropriate.

As with road advisories, the button icons across the top indicate the active application and allow the user to select the active application. The buttons on the right allow the user to select the category of advisory message to display.

Each Next Exit Services field is populated with a left or right arrow as appropriate, a small text field indicating the distance to the service and either an icon or text describing the service.

The template design calls for specific types of information to be entered in the bottom portion of the diagram. Diagram provided by Delphi and modified by VIIC

Figure 4-26 Next Exit Services Template

Figure 4-27 shows an example of a screen generated using this template.

Rendering of a message displayed showing top buttons labeled Sign, Nav, Toll, Gas, Park, information icons on the buttons down the right side, and icons representing food and related directional and distance information. Rendering provided by Delphi

Figure 4-27 Next Exit Services Screen Example

Traveler Advisory Template

The Traveler Advisory Template is general in that it can be extended and adjusted to display a wide variety of information types. Two of the several template types, along with typical corresponding message displays, are shown in Figures 4-28 to 4-31.

Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with category and description fields indicated for information input. Rendering provided by Delphi

Figure 4-28 General T/A Template


Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with category and description, and advice fields indicated for information input. Rendering provided by Delphi

Figure 4-29 Driver Advice Template


Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with information input on Festival, including type and ticket availability. Rendering provided by Delphi

Figure 4-30 General T/A Example


Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with information input on Festival, including type and parking availability. Rendering provided by Delphi

Figure 4-31 Driver Advice Example

Off-Board Navigation Template

The Off-Board Navigation Application (OBNA) uses several different templates depending on the state of the application. For brevity, only a few examples of typical screens are shown here. The same template approach described previously is used for this application. The user must be able to choose a destination via a scrollable list of destinations. Figures 4-32 and 4-33 show typical destination setting screens, with up/down buttons on the right side to scroll between the pages.

Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with selectable information on destinations. Rendering provided by Delphi

Figure 4-32 Destination Set Screen


Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with additional selectable information on destinations. Rendering provided by Delphi

Figure 4-33 Destination Screen (Page 2)

Once the destination has been selected and the system has obtained driving directions, a turn list is provided to guide the driver's maneuvers to reach the destination. This is shown in Figure 4-34.

Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with information indicating street names, turn directions, and distances. Rendering provided by Delphi

Figure 4-34 Off-Board Navigation Turn List Screen

The OBNA also provides a route overview screen that shows the entire route on a graphical map display as shown in Figure 4-35. It is useful to note that this screen also includes buttons/icons on the right side to access the turn list (See above) and to update or cancel the route.

Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with area map and selection buttons aligned to its right side. Rendering provided by Delphi

Figure 4-35 Route Overview Screen

Toll Payment Displays

The Toll Payment Application displays are very basic with only two general screens. Figure 4-36 shows the screen used to allow the driver to turn the Tolling Payment function on and off. Figure 4-37 shows the screen used to inform the driver when a toll has been paid. This is automatically displayed if the priority is high enough and a tolling payment transaction has just been carried out.

Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Toll selected, and a message indicating Automatic Payment On and an On/Off button. Rendering provided by Delphi

Figure 4-36 Toll Payment On/Off Screen


Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Toll selected, and a message indicating $5 toll billed and an OK button. Rendering provided by Delphi

Figure 4-37 Toll Payment Info Screen

Parking Payment Displays

The Parking Payment system behaves similarly to the Toll Payment system, although the screens are slightly more involved. These are shown in Figures 4-38 to 4-40.

Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Park selected, and a message indicating parking availability and buttons designated Return and Disable. Rendering provided by Delphi

Figure 4-38 Parking Announcement Screen


Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Park selected, and a message indicating parking fee billed and an OK button. Rendering provided by Delphi

Figure 4-39 Parking Payment Screen

The last parking screen allows for configuration of billing options (i.e., changing the account/credit card used to pay for the transaction).

Rendering of a template showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Park selected, and selectable buttons for payment method and buttons designated OK and Back. Rendering provided by Delphi

Figure 4-40 Parking Payment Billing Selection Screen

4.5.3.4 Security Services

The OBE contains two security elements: the certificate management subsystem and the security protocol implementation.

Certificate Management

As described previously, the certificate management subsystem has two elements. The first element interacts with the CA to request and process certificates. The second element controls access to and use of the certificates in the vehicle system.

When an application is added to the OBE, the OBE must verify that the application is legitimate, namely, that it has come from an approved vendor. OSGi supports the mechanism known as “code signing” to meet this requirement. Using code signing, a platform can be required to accept only certain applications. This provides individual OEMs with control over what applications their OBE will run, thus greatly improving system assurance. As part of the installation process, the application registers with the Security Services. The Security Services then generate an application and OBE-specific key pair, which can be used to request certificates for that specific instance of that application. This key pair, and all other cryptographic and security material for the application, is stored in an application security file on the OBE known as a WAVE Security Context (WSC).

Based on the WSC information, the CM then requests the appropriate security credentials from the CA. In the case of an anonymous application (one that will send anonymous public messages), the CM will interact with the Authorizing Authority and Certifying Authority as described in Section 4.11. In the case of an identified application (one that sends private messages), the CM would go to the CA responsible for providing identified certificates. The CA verifies that the OBE in question is entitled to run the appropriate application before issuing the certificate.

The CM continually checks the state of the certificates for each active application, and routinely requests replacement certificates when older certificates expire or are found to be revoked.

For each anonymous application, the CM maintains a pool of anonymous certificates provided randomly by the Certifying Authority. As the application sends messages, the CM randomly rotates the certificates in the pool, so that the same certificate is not reused repeatedly.
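A minimal sketch of this random rotation follows. The class and method names are hypothetical, and the real CM also tracks certificate expiry and revocation, which are omitted here.

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch of random certificate rotation from a pool,
// as described in the text. Certificates are represented as strings
// for simplicity; names are invented for this example.
public class AnonCertPool {
    private final List<String> pool;
    private final Random rng;

    public AnonCertPool(List<String> pool, Random rng) {
        this.pool = pool;
        this.rng = rng;
    }

    /** Picks a certificate at random for the next outgoing message,
        so that no single certificate accumulates a long message history. */
    public String nextCert() {
        return pool.get(rng.nextInt(pool.size()));
    }
}
```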

The CM maintains its own security credentials as well. These are used to interact with the identified Certifying Authority (or Authorizing Authority) during the various certification operations.

In addition to the certificate management operations described above, the CM is also responsible for assuring that the OBE has not been tampered with in any way. The first line of defense in this area comes from the installation process. If any attempt is made to install an unauthorized application, it will fail the code signing check, and the installation will stop. In addition, the OBE must authenticate itself each time the system starts up. In this approach, the OBE is not trusted by the security system. Instead, the OBE must provide information attesting to its legitimacy, and when this information has been verified as correct, the security system will release the security credentials. By this approach, any other participant in the system that receives a signed anonymous message can be assured that the message originated from an OBE that was able to successfully prove its authenticity, and that no unauthorized software was installed. For example, in a simple implementation, the vehicle (OBE) might provide the Vehicle Identification Number (VIN); if the VIN matches that stored in the CM, then the CM will allow the system to operate. If the VIN has changed (e.g., the system has been transported to a different vehicle), then the system will refuse to unseal the keys, and the applications will be unable to sign any messages. Due to the OEM-unique nature of this approach, this system was not implemented in the POC. In future developments, it is expected that this type of verification process will be elaborated and reflected in a requirement for OBE validation.

OBE Security Protocol

Once the identified and anonymous certificates have been established in the OBE, and the OBE has established its validity to the CM subsystem, the OBE Security Protocol implementation may use them to sign and encrypt outgoing messages in accordance with the IEEE P1609.2 DSRC Security Protocol. The same standard is used to verify, authenticate, and decrypt incoming messages.

The mechanisms used in IEEE P1609.2 are based on general Public Key Infrastructure (PKI) security principles. The mechanics of this will not be described here.

Of relevance to this subsystem description are the mechanisms used in IEEE P1609.2 to counteract various threats. Specifically, in addition to basic signing, the IEEE P1609.2 protocol includes “scope” elements that restrict the time, geography, and function associated with a signature. To accomplish this, the IEEE P1609.2 headers include the transmission time, the transmission location, and the PSID of the originating application. If the message is subsequently received at a significantly different time or at a different location, it will be considered invalid. In addition, if the message is sent from an originator who is not authorized to send messages of that type, then the PSID of the message will not match any PSID in the certificate, and the message will be considered invalid.
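The scope checks above can be illustrated with the following sketch. The thresholds, class, and method names are invented for the example and do not reflect the actual IEEE P1609.2 encoding or parameter values.

```java
// Illustrative validity check combining the three "scope" restrictions
// described in the text: time, geography, and function (PSID).
// Threshold values are placeholders, not values from the standard.
public class ScopeCheck {
    static final long MAX_AGE_MS = 5000;      // illustrative freshness window
    static final double MAX_DIST_M = 1000.0;  // illustrative geographic window

    /** Returns true only if the message is fresh, nearby, and the
        originating application's PSID appears in the certificate. */
    public static boolean isValid(long txTimeMs, long rxTimeMs,
                                  double distanceMeters,
                                  int msgPsid, int[] certPsids) {
        if (rxTimeMs - txTimeMs > MAX_AGE_MS) return false;  // stale message
        if (distanceMeters > MAX_DIST_M) return false;       // out of scope
        for (int p : certPsids) {
            if (p == msgPsid) return true;                   // sender authorized
        }
        return false;  // PSID not covered by the certificate
    }
}
```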

Also included in the OBE Security Protocol are mechanisms for localized encryption. This mechanism, known as VII-Datagram Transport Layer Security (V-DTLS), is used to encrypt public and private data sent from an OBE to a roadside unit. The function is specifically intended to protect probe data messages from being intercepted on the radio link. While the content of this data will eventually be made public, there is a risk that a bystander at the roadside could intercept the delivery of this data and correlate the content with a particular vehicle (for example, if only one vehicle is passing the roadside unit, it would be obvious where the data came from). Since probe data carries a history of speeds and locations, the information could compromise privacy when tied to a specific vehicle. For this reason, V-DTLS provides a mechanism for encryption over the radio link.

OBE Security Services are a critical element of the Security subsystem. These functions are used to manage the security certificates in the OBE on behalf of the radio and the various applications, and to perform the various message security operations such as signing, verification, encryption and decryption.

The POC Security Services architecture is shown in Figure 4-41. This figure illustrates that the Security Services software operates at multiple levels in the OBE software system. This requirement derives from the fact that different users of the Security Services exist at different levels in the software system. Specifically, the DSRC Radio code and the Crypto processing card driver exist in kernel space, while the various POC applications are implemented in Java in user space. In addition, the system was designed to be available to native applications (e.g., C++ applications) running in user space.

As a result, the code implementing the Security Services has complex interfaces between these levels, and some services, for example the Secure Messages and CM functions, use code implemented in both C++ and Java.

Block diagram with three elements. The uppermost element is the Application Framework (OSGi/Java) which includes Java Applications/Communications Manager leading down to Secure Messages on the left and Certification Management Application (Telecordia) leading down to Certification Management on the right. The next element down is User Space (C++) which includes several routines. On the left, Secure Messages connects to its counterpart in the element above, and on the right Certification Management connects to its counterpart in the element above. Other routines include Certification Stores, Applications Caches, Secure WSA's and Application Credentials. Also Time-Critical applications lead to Secure Messages in this element, while Cryptography Software connects to Cryptography Driver outside the element. The lowermost element is Kernel Space (C), which includes WME (Radio WO) leading to WSA Processing. A dashed connector leads from the uppermost element to the item Secure WSAs in the second element, and continues as a solid connector to WSA Processing in the lowermost element of the diagram.

Figure 4-41 POC Security Services Architecture

The Security Services operate as follows: the security system is initially set up with a root certificate that will be recognized by the CA (See Section 4.11). When an application registers for Security Services, the Application Credentials element determines that the application has no certificates in the Certificate Stores. This causes the CM to contact the CA (when DSRC Radio connectivity is available) to obtain certificates for the application. To do this, it uses the pre-stored root certificate to authenticate itself and to establish a secure session with the CA. When the CA returns the application certificates, the Certificate Manager stores them in the Certificate Stores.

Inbound Security Operations

When a WSA is received, it is sent to the Secure WSAs element that checks its signature. If the signature is valid, the Secure WSAs element notifies the radio, and the radio joins the advertised service (assuming there is a service for one of the registered OBE applications). This activity involves communications between kernel space (where the radio operates) and user space (where the Security Services operate). This communication is necessary because otherwise all of the code used to verify signatures would need to be duplicated in both user space and kernel space.

When a signed or encrypted WSM is received, it is sent to the Security Services by the Communications Manager for verification, decryption, or both (depending on the message). For a signed message, the Secure Messages element checks the signature and verifies that the certificate used for the signature is itself valid (by checking it against the CA certificate). If the signature is valid, Secure Messages notifies the Communications Manager, and the Communications Manager passes the message to the application for which it was intended. For encrypted messages, Secure Messages simply decrypts the message and passes the decrypted result to the Communications Manager.

At times, an application may choose to access the Security Services directly. This is primarily used when an OBE application has established a secure IP connection with another remote application. In this situation, the OBE application simply passes the message to Secure Messages and the result is passed back to the application.

Outbound Security Operations

If the OBE is operating as a Provider (See Section 4.3), it must broadcast signed WSAs. In this case, the WSA is formed by the radio and is then passed up from kernel space to the Secure WSAs element in user space for signing. Here, Secure WSAs uses a signing certificate it has obtained from the CA. The signed WSA is then passed back to the radio for broadcast.

When an OBE application wants to send a signed or encrypted WSM, it passes the message to the Communications Manager, which passes it to Secure Messages in user space. Depending on the needs of the application, Secure Messages either signs the message or signs and encrypts it, and passes it back to the Communications Manager for transmission.

Similar to the inbound leg, OBE applications can also use the Security Services directly to sign or encrypt WSMs and IP packets.

4.5.3.5 Positioning Service

The OBE Positioning Service provides a suite of positioning services to the other on-board services and applications based on the output of the external (primary) or internal (secondary) GPS receiver. The Positioning Service uses position information from the GPS receiver as well as extrapolated positions that it computes every 100 ms when requested.

The Positioning Service provides two separate APIs: one Java API, implemented as an OSGi bundle, and a native C/C++ API, implemented as a native library. A GPS daemon is used to share the access to the GPS port between at least two application processes: the Java Virtual Machine (JVM) process that runs the VII applications and at least one native process that runs the native applications. The POC Positioning Service architecture is illustrated in Figure 4-42.

Block diagram with four elements. An element on the left is labeled JVM/OSGi and includes Java Application and OSGi Positioning Service. An element on the right is Native Application. Below these, an element labeled Linux OS includes Native Positioning and GPS Daemon. Below this is an element labeled GPS Receiver. OSGi Positioning Service has a solid connector to GPS Daemon, which has a solid connector to GPS Receiver.

Figure 4-42 POC Positioning Service Architecture

Positioning Service Functions

The Positioning Service is available as an OSGi bundle, with an activator class. The API is built upon a main Java Interface (implemented by the activator), a set of classes representing the data provided or handled by the positioning service and a set of listener interfaces to be implemented by the application.

The Positioning Service Interface provides the following features:

These features are based on three basic data classes: the Point, Position, and Area classes.

Vehicle Position

The Positioning Service makes the assumption that the position of the GPS antenna defines the position of the vehicle. This is called the vehicle's reference point. While the position is always derived from the GPS receiver, there are several different types of position solutions, depending on how the position was computed: GPS, GPS-augmented with Differential Global Positioning System (DGPS) or dead reckoning. In addition, on request, the Positioning Service may extrapolate a position every 100 ms, in which case the type is an “extrapolated” type derived from the three types listed above.
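As a sketch of how a 100 ms extrapolation might work, the following assumes a simple constant-speed, constant-heading dead-reckoning step and a flat-earth approximation. All names and the conversion constant are illustrative; the actual POC algorithm is not specified here.

```java
// Illustrative dead-reckoning extrapolation from the last GPS fix.
// Heading is degrees clockwise from true north; speed is in m/s.
public class Extrapolator {
    static final double M_PER_DEG_LAT = 111320.0;  // approximate, flat-earth model

    /** Extrapolates a new (lat, lon) after dtMs milliseconds,
        assuming constant speed and heading since the last fix. */
    public static double[] extrapolate(double lat, double lon,
                                       double speedMps, double headingDeg,
                                       long dtMs) {
        double d = speedMps * dtMs / 1000.0;       // distance travelled, metres
        double h = Math.toRadians(headingDeg);
        double dNorth = d * Math.cos(h);
        double dEast  = d * Math.sin(h);
        double newLat = lat + dNorth / M_PER_DEG_LAT;
        double newLon = lon + dEast / (M_PER_DEG_LAT * Math.cos(Math.toRadians(lat)));
        return new double[]{newLat, newLon};
    }
}
```

At 10 m/s heading due north, a 100 ms step moves the position 1 m north and leaves the longitude unchanged.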

The Point Class

The Positioning API defines a Point Class. Each Point object represents one point on earth. A point cannot move. The longitude, latitude and altitude items are final once the object has been created. A point has no direction, speed or accuracy information.

The Point Class defines methods for performing operations on points, such as obtaining the latitude, longitude, or altitude of the point, computing the distance between two points, obtaining the true north azimuth of the line connecting two points, and so forth. The Point Class also provides a method for determining whether a specified point is inside or outside a polygon specified by a set of three or more points. In the POC project, this was used for payment events in the Tolling and Parking Applications, for relevance checks in the Advisory Message (Signage) Application, and for security certificate checks.
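The point-in-polygon test can be implemented with the standard ray-casting algorithm; the sketch below illustrates that algorithm and does not reproduce the actual POC Point Class API.

```java
// Ray-casting point-in-polygon test: cast a horizontal ray from the
// point and count how many polygon edges it crosses; an odd count
// means the point is inside. Coordinates could be lon/lat over a
// small area such as a toll plaza or parking lot.
public class PointInPolygon {
    /** Returns true if (x, y) lies inside the polygon whose vertices
        are given in order as {x, y} pairs. */
    public static boolean contains(double x, double y, double[][] poly) {
        boolean inside = false;
        for (int i = 0, j = poly.length - 1; i < poly.length; j = i++) {
            double xi = poly[i][0], yi = poly[i][1];
            double xj = poly[j][0], yj = poly[j][1];
            boolean crosses = (yi > y) != (yj > y)
                && x < (xj - xi) * (y - yi) / (yj - yi) + xi;
            if (crosses) inside = !inside;
        }
        return inside;
    }
}
```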

The Position Class

The Positioning API defines a Position class that extends the Point class with vehicle and measurement information. Each Position object represents the vehicle positioning information at a given time. It is a Point object, since the vehicle is at one point on earth; however, it also has speed and heading information (since the vehicle might be moving), and accuracy information, since the actual measurement method is known to have errors.

A Position object cannot be modified. The coordinates, heading, speed, measurement time, and accuracy cannot be changed. When a new vehicle position is generated, a new Position object is created. Applications can therefore keep a reference to a Position object without risk that its contents will change underneath them.

The Position class provides methods to get vehicle speed, direction and various accuracy measurements.

The Positioning Service provides two classes of vehicle position:

In addition to simply requesting the position, the estimated position can be obtained by registering a listener. This approach automatically provides the current position each time the position is updated.

Region and Area Class Listeners

Most VII applications are concerned with localizing the vehicle's position relative to physical features in the real world, such as gas station pumps, parking entrances and exits, etc. The applications typically access a map database that defines polygons for each feature of interest. These features are naturally grouped in areas: the gas pumps belong to one gas station; the entrances belong to one parking lot, etc.

This natural grouping is of importance to the Positioning Service for two reasons:

The Positioning Service provides support for registering region listeners. A region is a geometric figure that defines an exact portion of space; regions are described as polygons with Points (See The Point Class) as the vertices, or as a circle with a Point center and a radius. The listener is called when the vehicle enters the region and (optionally) when the vehicle exits the region.

The Positioning Service also defines an Area class that is used to group Region listeners and to notify the registered application when the vehicle is inside the area defined by the regions.
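A minimal sketch of a region listener for the circular case follows, assuming the Positioning Service delivers periodic position updates. The names are hypothetical and do not reflect the actual POC API.

```java
// Illustrative circular-region monitor: fires the listener only on
// enter/exit transitions, not on every position update.
public class RegionMonitor {
    public interface RegionListener {
        void onEnter();
        void onExit();
    }

    private final double cx, cy, radius;
    private final RegionListener listener;
    private boolean inside = false;

    public RegionMonitor(double cx, double cy, double radius, RegionListener l) {
        this.cx = cx;
        this.cy = cy;
        this.radius = radius;
        this.listener = l;
    }

    /** Called on each position update (e.g. every 100 ms). */
    public void update(double x, double y) {
        boolean nowInside = Math.hypot(x - cx, y - cy) <= radius;
        if (nowInside && !inside) listener.onEnter();
        if (!nowInside && inside) listener.onExit();
        inside = nowInside;
    }
}
```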

4.5.3.6 Vehicle Interface Service

The Vehicle Interface Service (VIS) provides a common referencing scheme and means for accessing vehicle data. It allows the OBE to be used in a variety of vehicle types without needing to customize each application to interface with each vehicle type.

Figure 4-43 shows the main logical layers of the Low Level Vehicle API Module:

Block diagram with four elements. The top element is Object Management, Layer D; the next element down is Mappings and Conversion Rules, Layer C; the next element down is Low Level Connectors, Layer B; and the bottom element is CAN Bus Access, Layer A.

Figure 4-43 Logical Layers of the Vehicle Interface

4.5.3.6.1 Low Level CAN Framework (LLCF)

Layers A and B were implemented using a code set developed by VW. This is known as the Low Level CAN Framework (LLCF). The LLCF is based on a network CAN driver that abstracts the different bus protocols over standard Linux interfaces (sockets). It defines a new protocol family, PF_CAN, similar to the IP protocol family PF_INET, between the network layer and the socket layer of the Linux Transmission Control Protocol (TCP)/IP stack. Figure 4-44 shows an overview of the POC architecture of the LLCF.

Block diagram with ten elements. Three elements at the top are labeled App1, App2, and App3. The next element down is Linux Socket Layer. Below this, an element on the left is labeled PF_INET and an element on the right is labeled PF_CAN, which includes three items labeled BCM, TP20, and RAW with two-way connectors to item RX-dispatcher. This item has a two-way connector to the next element down, labeled Linux Network Layer. This has a two-way connector to the elements at the bottom, labeled can0, can3, and vcan0.  Diagram provided by Volkswagen Group of America.

Figure 4-44 POC Architecture of the Low Level CAN Framework

The POC architecture re-uses the Linux Network and Socket Layers, ensuring ease of use for application developers, who can access the CAN bus over standard Linux sockets. The “can0” to “can3” modules are CAN drivers for different proprietary CAN buses that have been re-worked in the form of network drivers. The RX-Dispatcher is responsible for forwarding data from the CAN buses to the different protocol modules. Because of the particularities of CAN addressing, several protocol modules may want to receive the same message with a given CAN ID. The protocol modules register with the RX-Dispatcher for the CAN IDs they want to listen to, and the RX-Dispatcher then forwards each received frame to all entities that have registered listeners for it. In the POC, the VIS operates on behalf of the various applications.

The aim of the LLCF architecture is to make the communication with the CAN-bus as close as possible to standard TCP/IP communication over sockets.
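The RX-Dispatcher's register-and-forward behavior can be sketched as follows. The LLCF itself is kernel C code, so this Java sketch only illustrates the dispatch concept; the class and method names are invented for the example.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative dispatcher: protocol modules register for CAN IDs,
// and each received frame is forwarded to every registered listener,
// mirroring the RX-Dispatcher role described in the text.
public class RxDispatcher {
    public interface FrameListener {
        void onFrame(int canId, byte[] data);
    }

    private final Map<Integer, List<FrameListener>> listeners = new HashMap<>();

    /** A protocol module (e.g. BCM, TP20, RAW) registers for a CAN ID. */
    public void register(int canId, FrameListener l) {
        listeners.computeIfAbsent(canId, k -> new ArrayList<>()).add(l);
    }

    /** Forwards a received frame to all listeners for its CAN ID;
        returns how many listeners received it. */
    public int dispatch(int canId, byte[] data) {
        List<FrameListener> ls =
            listeners.getOrDefault(canId, Collections.emptyList());
        for (FrameListener l : ls) l.onFrame(canId, data);
        return ls.size();
    }
}
```

Two modules registering the same CAN ID both receive the frame, which is the behavior the LLCF needs because of CAN's addressing model.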

4.5.3.6.2 Vehicle API

Because the OBE applications are Java bundles running in the OSGi Framework, it is necessary to provide a Java API so that these applications can efficiently and easily access the information provided by the LLCF. The Vehicle Application Programming Interface (VAPI), also developed by VW, runs on top of the LLCF.

As shown in Figure 4-45, the VAPI is separated into two parts: the VAPI Daemon (server) and the J-VAPI Client Stub. The VAPI Daemon runs as a native application on top of the LLCF. The J-VAPI Client Stub provides a Java API to access the data that the VAPI Daemon obtains from the LLCF and the underlying CAN network.

Complex diagram with flow chart, circuit diagram, and front aspect of a passenger vehicle. The circuit diagram is labeled Car Gateway and includes the Client, which connects to the VAPI Daemon. This connects to the left to the profile.xml and chooser.xml and J-VAPI Client. It also has two connectors downward. One connector goes to the GPS HW, and on to the GPS antenna outside the element. The other connects to LLCF and CAN HW items and to other branches outside the element. Diagram provided by Volkswagen Group of America.

Figure 4-45 VAPI Architecture

The VAPI Daemon runs on the device to which the vehicle network is attached. The VAPI Daemon has a VAPI Profile (shown as “Profile.xml” in Figure 4-45), which it loads at startup; this profile uniquely specifies which vehicle sensors are accessible through the VAPI, thereby tailoring the VAPI to the specific vehicle in question. The VAPI defines three logical elements in describing the VAPI Profile:

All of this information is combined in a single VAPI Profile Extensible Markup Language (XML) file, which uniquely specifies which vehicle sensors are supported on a given vehicle. A tool called the Profile Generator can be used to construct this VAPI Profile XML. The VAPI profile often contains proprietary information about the structure of a given car-maker's CAN bus implementation. Since this information is considered proprietary, it was made difficult for a human to read via an “obfuscator.” The obfuscator essentially scrambled the profile in a manner that made it effectively impossible for a casual user to observe or copy the content of the profile.

VI Device Management Tree Admin

As described, the VAPI provides a Java API for a specific set of parameters available in a particular vehicle. An additional problem that must be overcome is that the POC uses different vehicle types, and yet the OBEs are all the same. To avoid needing to port each application to each vehicle, the OSGi Vehicle Interface Device Management Tree (VIDMT) was used (Layer D). The VIDMT Admin provides a standard naming scheme for high-level access, which is then mapped to the different vehicle ontologies by the OEMs themselves. The mapping can be provided in XML form and hides the proprietary internal structure of the vehicles. This approach also makes the naming scheme independent of the actual sensor networking; therefore, the sensors can be grouped in a logical way that is more easily understandable by application programmers. The OSGi Vehicle Expert Group (VEG) scheme follows a structural approach using six sub-trees:

Each sub-tree contains vehicle component primitives that describe the basic naming scheme for that particular type of component. Using this scheme, the specific resource names for any and all instances of a vehicle component can be consistently named. The Device Management Tree Meta Data provides the basic information about what is available in the particular car, and the application can easily determine the name of each parameter.
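The core idea, resolving a standard logical name to a vehicle-specific signal through an OEM-supplied mapping, can be sketched as follows. The tree path and signal names below are invented for the example; the real mapping is loaded from the OEM's XML file.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative resolver for the VIDMT idea: applications use a
// vehicle-independent logical path, and an OEM-supplied mapping
// (XML in the real system) resolves it to the proprietary signal.
public class VidmtResolver {
    private final Map<String, String> mapping = new HashMap<>();

    /** Loads one entry of the OEM mapping (hypothetical names). */
    public void addMapping(String logicalPath, String proprietarySignal) {
        mapping.put(logicalPath, proprietarySignal);
    }

    /** Resolves a logical tree path; returns null if this vehicle
        does not expose the parameter. */
    public String resolve(String logicalPath) {
        return mapping.get(logicalPath);
    }
}
```

The application never sees the proprietary signal name, so the same application code runs unchanged on every vehicle type that supplies a mapping.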

4.5.4 DSRC/GPS Antenna

The POC antenna posed a special set of design constraints. The OBE uses both GPS and DSRC Radio functions. Since these two systems operate in different parts of the radio spectrum, each requires its own antenna. In addition, the coverage patterns are very different. DSRC requires good coverage in all azimuth directions but only a very limited range of elevation coverage (because the other cars and RSEs are all generally on the same, more or less planar, road). The GPS system seeks signals from space, so the GPS antenna must provide good gain in both the vertical and azimuth axes. In addition, because GPS signals are sent from satellites in orbit, the signals are very weak, so it is typical to include low-noise amplifiers at the antenna to limit the noise contribution from the cabling to the receiver.

Most DSRC work done prior to the POC used vertical post antennas. These provide excellent radial gain, but they protrude vertically from the vehicle. One of the goals of the VII POC program was to demonstrate that such a system is feasible for production vehicles, and this meant that the antennas needed to have as little impact as possible on the vehicle profile and aesthetics.

The solution was to develop a magnetically attached planar antenna module that provided both GPS and DSRC functions. The magnetic mount allowed the antennas to be quickly installed with no modifications to the vehicle, and it also allowed the antennas to be moved easily from vehicle to vehicle.

The antenna design, shown in Figure 4-46, uses a single planar element that has patterns for both DSRC and GPS. The patch itself forms the GPS antenna, which is tuned by the location of the feed posts and the corner cuts (See Figure 4-46). This creates a Right Hand Circularly Polarized antenna with good vertical and azimuth gain performance. The DSRC antenna is formed by a ring slot structure etched into the substrate above the GPS patch. This structure has similar performance to the monopole post used in prior DSRC work, but it is co-planar with the surface of the vehicle. The DSRC pattern was optimized by the addition of circumferential slots around the ring.

Diagram shows a rendering of the antenna element on graph paper, x, y, and z labels to designate the antenna planes. Diagram provided MARK IV IVHS, Inc.

Figure 4-46 Planar Dual GPS/DSRC Antenna Element

The measured gain plots for the two antenna structures are shown in Figures 4-47 and 4-48.

Line plot of gain dBi, axial ratio dB over theta degrees for six data sets. The data sets for CP gain at zero degrees, 45 degrees, and 90 degrees track closely in an inverted u-shape, with values ranging from minus 4 to minus 2 dBi at about theta minus 90 degrees, increasing to about 6 dBi at theta minus 20 degrees, tracking along this value to theta 20 degrees, then falling steadily to values ranging from minus 4 to minus 2 at theta 80 degrees. The data sets for axial ratio at zero degrees, 45 degrees, and 90 degrees have a wider spread of initial values between less than 1 and 5 dBi at theta minus 90 degrees, converge slightly to a range between less than 1 and about 2 dBi from theta minus 40 to theta 20 degrees, and all trend upward to a range between 4 dBi and 7 dBi at theta 80 degrees. Diagram provided by MARK IV IVHS, Inc.

Figure 4-47 GPS Antenna Gain


Line plot of gain dBi, axial ratio dB over theta degrees for eight data sets labeled phi, starting at 0 and increasing in increments of 45 to 315. All plots track closely in an approximate m-shape. Values range between minus 4 and 1 dBi at theta 10 degrees, trend upward to range between minus 1 and 5 dBi at theta 20 degrees, swing downward to range between minus 7 and minus 1 at theta 50 degrees, swing upward to range between 2 and 7 dBi at theta 70 degrees, and then drop to range between minus 3 and 2 dBi at theta 90 degrees. Diagram provided by MARK IV IVHS, Inc.

Figure 4-48 DSRC Antenna Gain

The antenna was packaged in a low profile plastic package that included room for the GPS low-noise amplifier. Power for the amplifier was passed through the RF coaxial cable. The package and cabling are shown in Figure 4-49, and the unit mounted to a POC vehicle is shown in Figure 4-50.

Photograph shows a thin black package with cabling coiled around it.

Figure 4-49 Dual DSRC/GPS Planar Antenna Package and Cabling

Photograph shows thin black package on a vehicle roof, with cabling.

Figure 4-50 Antenna Mounted on POC Vehicle

4.5.5 External Positioning Unit

The OBE uses a combination of internal and external GPS receivers. The internal receiver is part of the Eurotech DuraCOR unit and is described in Section 4.5.1.1. Because the POC applications generally require higher levels of accuracy than the DuraCOR GPS receiver can provide, a provision was made for an external unit. The external GPS receiver includes a dead reckoning sensor that is intended to track the position of the vehicle between GPS position reports by keeping track of distance and heading changes, and extrapolating a position fix from the last known fix. The dead reckoning sensor is a low-cost gyroscopic device that provides motion change information to the external GPS receiver. The output of this device is calibrated by the receiver while in GPS coverage, and used to update the vehicle location during short GPS outages. The internal GPS receiver is used primarily to provide an accurate time base for the DSRC Radio; however, it also is available for use as a back-up source of positioning information.
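The dead reckoning extrapolation described above can be illustrated with a simple flat-earth projection from the last known fix. This is only a sketch; the actual SiRFStar/U-Blox filtering and calibration are far more sophisticated, and the coordinates and constants below are hypothetical:

```python
import math

# Illustrative dead-reckoning sketch (hypothetical, not the receiver's
# filter): project a position forward from the last GPS fix using the
# distance travelled and the current heading.
EARTH_R = 6371000.0  # mean Earth radius, meters

def dead_reckon(lat, lon, heading_deg, distance_m):
    """Project a new lat/lon from a fix, a heading, and a distance."""
    h = math.radians(heading_deg)
    dlat = (distance_m * math.cos(h)) / EARTH_R
    dlon = (distance_m * math.sin(h)) / (EARTH_R * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

# Hypothetical example: vehicle at 42.0 N, 83.0 W heading due north,
# 100 m travelled since the last fix:
lat, lon = dead_reckon(42.0, -83.0, 0.0, 100.0)
```

During a GPS outage, the receiver repeats this step with gyro-corrected headings and odometer distances until live fixes resume and the sensors can be recalibrated.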

In the POC program, two types of external positioning units were used: a SiRFStar II unit and a U-Blox unit. Both of these systems included internal dead reckoning sensors and the associated filtering software. These systems obtained a GPS signal from the dual DSRC/GPS antenna via a power splitter that also routed the signals to the internal GPS card in the DuraCOR unit.

The overall positioning system is shown in Figure 4-51.

Block diagram with seven elements. At the top, gyro and odometer lead to an element labeled External GPS Receiver. This element connects down to an element labeled Positioning Service. At the right, an element labeled Power Splitter connects to the External GPS Receiver to the left, and loops around to the right and down to Internal GPS Receiver, which leads to Positioning Service at the left and to the element labeled DSRC Radio above. This element is fed by an element labeled Shared Antenna, which also feeds the element Power Splitter. DSRC Radio also connects to the Positioning Service.

Figure 4-51 OBE Positioning Subsystem

The SiRFStar II unit is shown in Figure 4-52, and the two units are compared in Figure 4-53.

Photograph showing a view of the top and one side of the unit that has connection ports and a toggle switch.

Figure 4-52 SiRFStar Positioning Unit


Photograph showing the two units side by side, with cabling attached.

Figure 4-53 SiRFStar II and U-Blox Positioning Units

4.5.6 Power Management Unit

Because the vehicle power system is both noisy and subject to abrupt interruption, the vehicle integration included a power management system. The CarNetix power control module, shown in Figure 4-54, provides filtering as well as programmable timing for various power buses. This allows the system to power on and off the supply voltages for various components in a specific and prescribed order.
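The prescribed power-up ordering can be sketched as a table of per-bus delays. The bus names and delay values below are hypothetical; the actual CarNetix configuration is set through its client software and internal jumpers:

```python
# Illustrative sketch of ordered power sequencing (hypothetical bus names
# and delays; not the CarNetix firmware). Each bus is energized after a
# programmable delay so components come up in a prescribed order.
POWER_ON_SEQUENCE = [
    ("5V_logic",    0.0),   # seconds after ignition-on
    ("12V_radio",   0.5),
    ("12V_display", 1.5),
]

def power_on_order(sequence):
    """Return bus names in the order they are energized."""
    return [bus for bus, _delay in sorted(sequence, key=lambda p: p[1])]

print(power_on_order(POWER_ON_SEQUENCE))
```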

Photograph of a light-colored metal box with connection ports on the end. Photograph provided by CarNetix.

Figure 4-54 CarNetix Power Management Unit

The CarNetix unit provides a USB port which allows a laptop and CarNetix client software to configure, manage and if needed, monitor the power supply. The power supply hardware configuration is controlled by jumpers internal to the CarNetix unit. Figure 4-55 shows the setup screen used to set and monitor the supply lines and their respective on/off sequencing.

Rendering of a screen that includes gauges and digital readouts for various operating parameters.

Figure 4-55 CarNetix Power Management Controls

4.6 VIIC POC Vehicle Integration

The POC included a variety of vehicles, each of which presented its own system integration challenges. For the VIIC-equipped vehicles, a standard setup was used that allowed the system to be efficiently installed in any of the vehicles. A few of the VIIC automaker members also developed their own custom installations.

Complete installation of an OBE subsystem includes installing and interconnecting the following equipment in the vehicle:

The OBE subsystem assembly is composed of a main aluminum chassis that holds the multiple components that make up the assembly. This is shown in Figure 4-56. The purpose of the raised panel on which the components are mounted is to allow a convenient place to store the large cable harness. This harness was necessary since each car is slightly different in layout, and the program wanted to avoid developing specialized harnesses for each vehicle. As a result, use of the “one size fits all” cable harness often meant that there was excess cable length that was stored in the area under the OBE chassis platform. A portion of this cabling can be seen in Figure 4-57.

Photograph with eight components labeled. These include OBE, External Positioning HW, Ethernet Switch, Power Supply, GPS Antenna Splitter, Power Distribution Block, External Positioning Cables, and External Positioning Connectors.

Figure 4-56 OBE Subsystem Assembly


Photograph with two items labeled. One is Cables to Passenger Compartment. The other is Ethernet, Battery, Ignition, and HMI Power/Return Cables Added.

Figure 4-57 OBE Cabling Assembly

A typical trunk assembly is shown in Figure 4-58. This approach allowed the OBE to be installed with minimal disruption of the vehicle and allowed the OBE subassembly to be easily accessible. As an example of the complexity of the vehicle integration, the hinges shown in the figure are necessary to allow the OBE assembly to be tilted up to access the vehicle's spare tire, a necessary precaution for a vehicle traveling thousands of miles in tests on the open road.

Photograph shows OBE unit inside a vehicle trunk.

Figure 4-58 OBE Trunk Mounting

The HMI display also presented challenges. In some vehicles, it was possible to remove an existing screen or display subsystem and replace it with the HMI. In others, there was simply no room to do so.

Figures 4-59 and 4-60 show two typical HMI mounting approaches. Figure 4-59 is in a Ford Mustang, which required an external mounting approach. Figure 4-60 is a Ford Edge, which allowed a more fully integrated in-dash mount.

Photograph showing dashboard and unit mounted adjacent to the steering wheel.

Figure 4-59 HMI External Dash Mount

Photograph showing dashboard and unit integrated above control panel adjacent to steering wheel.

Figure 4-60 HMI In-Dash Mount

The dual DSRC/GPS antenna was typically mounted to the roof by magnets embedded in the antenna housing. Figure 4-61 shows the antenna mounted on the center of the vehicle roof. This placement allows the antenna to be horizontal which results in symmetrical antenna coverage front-to-back. Other installations placed the antenna closer to the rear window (thereby eliminating the long cable run across the roof). This approach resulted in asymmetrical antenna coverage at times. The effects of these differences are discussed in Final Report – Volumes 3a, 4a and 5a.

Photograph showing unit on a vehicle roof top with cabling secured to the top and along the side of the windshield.

Figure 4-61 OBE Roof Mount DSRC/GPS Antenna

4.7 POC Applications Description

4.7.1 POC Applications Overview

As previously described, the VIIC POC effort developed seven applications that used and exercised the core system functions. These applications are:

In-Vehicle Signage receives electronic advisory messages from roadside units, and, based on location and timing information, presents the message content in graphical and audible form to the driver using the OBE HMI.

Probe Data Collection gathers vehicle operating data from the Vehicle Interface and position information from the Positioning Service, and compiles a “snapshot” of the vehicle state at that time and location. The application saves snapshots in a set and then uploads the snapshot set to the network-based PDCS when the vehicle encounters an RSE.

Electronic Payments–Toll sends out an announcement from a local toll processor via an RSE. The announcement contains toll plaza location information. When the OBE application determines it is inside the toll plaza zone, it obtains toll payment information and toll payment zone information from the local toll processor. When the vehicle enters a payment zone, the OBE application notifies the payment service, which sends a payment message to the local toll processor. All messaging relating to user identity and payments is encrypted, and transactions occur at vehicle road speed.

Electronic Payments–Parking operates on the same principles as tolling, but speeds are slower and payment and plaza zones are smaller and more complex.

Traveler Information / Off-Board Navigation sends a request for a route from the current OBE location to a pre-set destination. The request is forwarded by a web services system to a navigation service provider, where the route, including turn-by-turn directions, is computed. The directions are sent back to the OBE at the same RSE where the request was received. If delivery of the route is interrupted, for example by the vehicle leaving the RSE zone before the download is complete, the process resumes where it left off at the next RSE encounter. The route may also be updated based on real-time traffic data collected, for example, from the probe system.

Heartbeat compiles a regular vehicle status message containing speed and position data, sending messages out at regular intervals (typically every 100 ms). The OBE also receives the same type of message from other vehicles. The primary output is a log of sent and received messages (the current application does not do any safety processing on the messages). This application is primarily used to assess high-rate message generation and reception.
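The heartbeat cadence can be illustrated with a minimal timing sketch. The 100 ms interval matches the figure above; the function itself is a hypothetical illustration, not the OBE implementation:

```python
# Illustrative sketch of the heartbeat cadence (hypothetical helper, not
# the OBE code): one status message is compiled every 100 ms.
HEARTBEAT_INTERVAL_MS = 100

def heartbeat_timestamps(duration_ms):
    """Times (ms) at which a heartbeat message would be sent."""
    return list(range(0, duration_ms, HEARTBEAT_INTERVAL_MS))

# One second of operation yields ten messages:
assert len(heartbeat_timestamps(1000)) == 10
```

At this rate, an RSE zone holding the 250-vehicle maximum sees on the order of 2,500 heartbeat messages per second, which is why the application is useful for stress-testing message generation and reception.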

Traffic Signal Indication is a stub application. A traffic signal controller sends a Signal Phase and Timing (SPAT) message to a local RSE at regular intervals. The RSE transmits the message, and the OBE receives it. The Traffic Signal Indication Application decodes the message and presents the current signal state and the time remaining in that state using the HMI display. This application is used to test the effectiveness of the system in handling and prioritizing safety messages while supporting lower priority operations.

4.7.2 Tolling Payments Application

The POC Payment for Toll (“Tolling”) Application allows the vehicle driver to securely make an automatic toll payment while passing through a defined tolling area.

Tolls are assessed and paid as the vehicle travels at high speed through a defined toll plaza and lane-specific charging zone. The system is operated by a Local Transaction Processor (LTP) that is in direct communication with a local RSE. The Tolling Application utilizes a network side debit account from which toll payments are deducted. The application uses digital certificates and digital signatures to encrypt all data transactions for privacy, and to authenticate all payment confirmations. These security measures ensure that an indisputable toll amount is deducted from the correct financial account while preserving the confidentiality of the account information.

The Tolling Application is distributed over four major components: an In-Vehicle Component, an LTP Component, a VII system component and a Network Users Component (NUC).

The In-Vehicle Component contains the In-Vehicle Toll Processing (IVTP) Element and In-Vehicle Payment Service (IVPS) Element. The IVTP identifies when toll transactions occur, facilitates the transfer of account information between the driver and the vehicle, presents information to the driver, and gets selections from the driver via the vehicle HMI.

The LTP Component contains an LTP Toll Processing (LTTP) Element that defines the existence of toll zones, generates Toll-payment invoices and processes the corresponding acceptances of payment during the course of the toll payment transaction.

The NUC contains a Network Users Payment Service (NUPS) Element that verifies the toll payment.

The application operates through the VII network component that provides a Roadside Infrastructure Support Service and a Communications Service. Together, these two services support message routing and delivery, service announcements and communications support between the elements in the In-Vehicle, Local Transaction Processor and NUCs.

In addition, the Tolling Application makes use of the Communication Manager, the HMI Manager, the OBE VII Positioning Service and VII Security Service during the course of the Toll Payment Transaction.

The relationship of these components in the context of the Tolling Application is provided in Figure 4-62.

Block diagram with nine elements. The central element is labeled VII Infrastructure System and has one item highlighted: Communications Service. This feeds upward and to the left to an element labeled Local Transaction Processor, which includes Toll Processing and a connection to Toll Facility. It also feeds to the left to an element labeled On-Board Equipment, which includes Tolling Application and Payments Application, and connects downward to Vehicle Operation. It also feeds to the right to an element labeled Transaction Service Provider, which includes Account Services and connects downward to Financial Institution and Tolling Authority. Local Transaction Processor also has a two-way connection to Transaction Service Provider.

Figure 4-62 Payment for Toll Application System Overlay Diagram

4.7.2.1 POC Tolling Application Architecture

Figure 4-63 illustrates the elements of the Payment for Toll Application. The LTTP communicates to the IVTP via the RSE Radio Handler. The LTP communicates directly with the NUPS as shown, however the physical connection is routed through the Network Component for POC.

Block diagram with four elements, three on the top level and one on the bottom level. At the top left, an element labeled OBE includes In-Vehicle Tolling Component and In-Vehicle Payment Component. The central element is labeled VII network, which includes RSE with RSE Radio Handler on the left and SIDN on the right. The third element is labeled Service Provider and includes Network Payment Component. On the bottom level, the element Local Transaction Processor includes LTP Tolling Component. Both items in the element labeled OBE connect to the RSE Radio Handler, which connects across to SIDN and further to Network Payment Component, and down to LTP Tolling Component. Network Payment Component also loops via a dashed line down and across to the LTP Tolling Component.

Figure 4-63 Payment for Toll Component Diagram

In-Vehicle Component

As shown in Figure 4-64, the Tolling Application comprises the IVPS Element and the IVTP Element. The IVTP Element uses the payment services provided by the IVPS Element during the course of a toll payment transaction.

Block diagram with eight elements in four rows. The topmost element is labeled In-Vehicle Payment Service Element and includes Invoice Processing and Logging. It connects to the right to the element in the second row labeled Logging Service, and also has a two-way connection down to the third element labeled In-Vehicle Tolling Processing Element, which includes Zone Processing, Transaction Processing, Communications Interface, Presentation Interface, and Logging. This element has a connection to the element labeled Logging Service, as well as down to elements labeled Positioning Service, HMI Manager, Security Service, and Communications Manager. A connection via dashed line is shown between the element labeled Communications Manager and the final element labeled VII System.

Figure 4-64 In-Vehicle Component Overview

In-Vehicle Payment Service Element

The IVPS contains account identification information for multiple accounts and provides an interface for the IVTP to either request account identification information, or exchange invoice and receipt data. The IVPS also has the ability to digitally sign invoices using the OBE Security Services.

In-Vehicle Toll Processing Element

The IVTP controls the toll transaction process inside the vehicle. To accomplish this, the IVTP determines when the vehicle is inside of the Toll Plaza Zone(s) and Toll Collection Zone(s) using the VII Positioning Service and the geographic information provided in the LTTP messages.

At each geometric crossing point, the IVTP initiates the next stage in the process by sending messages to the LTTP and by activating the IVPS (to start the actual payment process). The IVPS and IVTP are separate to allow for different types of payment methods without changing the tolling transaction logic.
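The zone-crossing logic described above can be sketched as a small state machine driven by positioning-service notifications. The stage and event names below are hypothetical simplifications of the actual IVTP transaction logic:

```python
# Illustrative sketch (hypothetical stages and events, not the IVTP
# code): a minimal state machine advanced by zone-crossing notifications
# from the positioning service and confirmations from the LTTP.
class TollStateMachine:
    def __init__(self):
        self.stage = "OUTSIDE"

    def on_event(self, event):
        transitions = {
            ("OUTSIDE", "entered_plaza"): "IN_PLAZA",
            ("IN_PLAZA", "entered_collection_zone"): "IN_COLLECTION_ZONE",
            ("IN_COLLECTION_ZONE", "invoice_confirmed"): "PAID",
        }
        # Unknown (stage, event) pairs leave the stage unchanged.
        self.stage = transitions.get((self.stage, event), self.stage)
        return self.stage

sm = TollStateMachine()
sm.on_event("entered_plaza")            # -> IN_PLAZA
sm.on_event("entered_collection_zone")  # -> IN_COLLECTION_ZONE
sm.on_event("invoice_confirmed")        # -> PAID
```

Keeping the payment mechanics in the separate IVPS means only this transaction logic changes if the zone definitions change, and only the IVPS changes if the payment method does.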

In-Vehicle Toll Processing Presentation Management

The Presentation Management notifies the driver using visual displays and audible tones (Via the OBE HMI Manager) to indicate when a toll has been paid. The typical screen used to indicate a toll payment is shown in Figure 4-65.

Rendering of a screen showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Toll selected, an information line indicating amount of toll billed, and a button designated OK. Rendering provided by Delphi

Figure 4-65 Toll Payment Screen

LTP Component

Shown in Figure 4-66, the LTP Component contains the LTTP. The LTTP provides service announcement and toll zone information to the IVTP(s) in vehicles that are in the coverage area of the RSE that the LTP is connected to. The LTTP generates invoices for vehicles passing through the toll zone, processes signed toll payment invoices, and provides transaction summary information to the NUPS.

The LTP is located adjacent to the RSE to minimize the effect of network latency. For low latency applications such as tolling, this approach avoids latency caused by the need to traverse the network to a remote site.

Block diagram with five elements. The central element is labeled Local Transaction Processor Tolling Element, and includes several subroutines for announcements, invoice generation, payment processing and management, receipt handling, and communications. This element is connected to elements labeled RSE, NUPS, Logging Service, and Security Service.

Figure 4-66 LTP Component Overview

Network User Component

The NUC of the Tolling Application, shown in Figure 4-67, consists solely of the NUPS Element. The NUPS authenticates secure connections from the LTTP, validates the digital signatures of signed toll payment invoices, creates toll payment receipts and summaries, and updates information in the User Account database.

Block diagram with three elements. The central element is labeled Network User Payment Service Element, and includes Process Transaction Summaries, Update Debit Account Information from User Account, Generate Final Transaction Summary, and Logging. This element connects to the left to the element labeled Local Transaction Processor, and down to User Account Database.

Figure 4-67 NUC Overview

4.7.2.2 Tolling Application Flow of Events

The general flow of events for the Tolling Application is briefly described in this section (for a detailed description, see the VIIC's Tolling System Functional and Performance Requirements APP 110-04).

Roadside Setup

  1. The RSE Radio Handler announces the Toll Service in the WSA broadcast by the RSE. The PSC field of the WSA includes coordinates defining the “Toll Plaza Geometry” (the region inside which the vehicle should join the service and prepare to pay the toll).
  2. The LTTP Element periodically creates a “Toll Zone Definition” message to be broadcast by the RSE on the SCH.
  3. The LTTP element establishes a secure Transport Layer Security (TLS) session with the NUPS.

In Vehicle Setup

  1. The IVTP Element registers for the Toll Service. Note that two registrations are required, one for the WSMP service and one for the IP service.

Operational Flow

  1. The Vehicle approaches and enters the RSE coverage area, but has not yet entered the toll plaza area.
  2. The IVTP Element receives the WSA, which includes the “Toll Plaza Geometry,” indicating service is available and that the service has been joined.
  3. The IVTP Element sends the Toll Plaza geometry to the OBE's Positioning Service.
  4. The IVTP Element receives the Toll Zone Definition.
  5. Time passes, vehicle moves…
  6. The OBE Positioning Service notifies the IVTP Element when the Vehicle has entered the Toll Plaza geometry.
  7. The IVTP Element sends a “Vehicle In-Plaza Zone” notification to the LTTP Element as an encrypted message.
  8. The IVTP Element sends the Collection Zone(s) geometry from the Zone Definition message to the OBE's Positioning Service.
  9. In response to the Request for Toll Invoice (Vehicle In-Plaza Zone) message, the LTTP Element sends a Session Acknowledgement to the IVTP Element.
  10. Time passes, vehicle moves…
  11. The vehicle reaches/enters a Toll Collection Zone (Defined in the original Zone Definition message).
  12. The OBE Positioning Service notifies the IVTP Element that the Vehicle has reached a Toll Collection Zone.
  13. The IVTP Element sends a “Vehicle In Collection Zone” notification containing the Vehicle's lane position and classification to the LTTP Element.
  14. In response to the message, the LTTP Element sends a Toll Invoice to the IVTP Element.
  15. The IVTP Element extracts the E-Payment Invoice from the Toll Invoice and passes it to the IVPS Element.
  16. The IVPS Element signs the invoice using the WAVE Security service and returns it to the IVTP Element.
  17. The IVTP Element incorporates the signed E-Payment Invoice into a Signed Toll Invoice message and sends it to the LTTP Element.
  18. The LTTP Element sends an Invoice Confirmation message to the IVTP Element acknowledging the collection of the fee (for POC test purposes).
  19. The IVTP Element causes a tone to be sounded on the Vehicle HMI and an indication to be displayed on the Vehicle HMI when the toll collection point is passed (for POC test purposes).
  20. The IVTP Element removes all records of the transaction from the vehicle memory.
  21. The Vehicle continues on its way.
  22. The LTTP Element makes a record of the transaction and includes the signed E-Payment Invoice into a Toll Transaction Summary.
  23. The LTTP Element regularly sends a Toll Transaction Summary to the NUPS Element for processing.
  24. The NUPS Element extracts each signed E-Payment Invoice from the Toll Transaction Summary and verifies the signature using the WAVE Security Service.
  25. The NUPS Element deducts the toll from the Driver's pre-paid debit account.
  26. For each processed signed E-Payment Invoice, the NUPS creates an E-Payment Receipt and signs this using WAVE Security Service.
  27. The NUPS includes the signed E-Payment Receipt into a Toll Receipt Summary message.
  28. The NUPS regularly sends a Toll Receipt Summary to the LTTP Element.
  29. The LTTP Element extracts each signed E-Payment Receipt from the Toll Receipt Summary and verifies the signature using the WAVE Security Service.
  30. The LTTP Element correlates each E-payment Receipt with the Toll Invoice previously recorded.
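The sign-and-verify round trip in steps 15 through 29 can be sketched as follows. The POC uses WAVE (IEEE 1609.2) digital signatures; the HMAC used here is only a stand-in to show the flow, and the key and invoice fields are hypothetical:

```python
import hashlib
import hmac

# Illustrative sketch of the invoice sign/verify flow. The POC uses WAVE
# digital signatures; keyed hashing here is a stand-in, and the key and
# invoice fields are hypothetical.
VEHICLE_KEY = b"obe-demo-key"

def sign_invoice(invoice: str) -> str:
    """Stand-in for the IVPS signing step (step 16)."""
    return hmac.new(VEHICLE_KEY, invoice.encode(), hashlib.sha256).hexdigest()

def verify_invoice(invoice: str, signature: str) -> bool:
    """Stand-in for the NUPS verification step (step 24)."""
    return hmac.compare_digest(sign_invoice(invoice), signature)

invoice = "plaza=12;lane=3;class=car;amount=1.50"
sig = sign_invoice(invoice)          # IVPS signs the E-Payment Invoice
assert verify_invoice(invoice, sig)  # NUPS later verifies the signature
assert not verify_invoice("plaza=12;lane=3;class=car;amount=0.00", sig)
```

The last line shows why the signed invoice is indisputable: any alteration of the amount after signing causes verification to fail.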

4.7.3 Parking Payment Application

The Parking Payment Application allows the vehicle driver to securely make a payment for parking privileges. Parking fees associated with that vehicle within a parking lot are charged to the user's credit card account. In the POC implementation, the driver pays a fixed-rate event-style parking fee, charged at a standard rate on entry to the parking facility. The driver interacts with the application by way of the generic HMI developed for all POC applications.

The Parking Payment Application uses a pre-established credit card service from a VIIC surrogate financial institution to allow purchases made through the VII infrastructure. The application uses digital certificates and digital signatures to authorize payments and verify payment confirmations. All transaction data exchanged is encrypted. These security measures ensure that an indisputable parking fee amount is charged to the correct credit card account while preserving the confidentiality of the user's account information.

The Parking Payment Application is constructed using essentially the same components used for the Tolling Payment Application. The substantive difference between the two applications is that the Parking Payment Application involves somewhat finer precision in the definition of the payment zones to allow for entry and exit regions located close to one another, and the transaction involves a few user interface screens to accept the payment. Architecturally however, the two applications are the same, and for brevity, we have not repeated the detailed description here.

The Parking Payment screens are shown in Figures 4-38 to 4-40.

4.7.4 Probe Data Collection Application

Figure 4-68 shows how the Vehicle Probe Data Generation Application fits into the VII POC architecture. In this figure, the Probe Data Service (PDS), by way of a probe data proxy located at the RSE, announces that it is collecting probe data at that RSE. When the vehicle encounters an RSE making such an announcement, the PDC Application in the OBE sends the Probe Data Proxy application in the RSE a package of "snapshots" of vehicle operation data (a suite of parameters) collected at regular intervals in the time preceding the encounter. The Probe Data Proxy application then passes this data on to the PDS at the SDN.

Block diagram with five elements. The central element is labeled VII Infrastructure System and has two items called out: Communications Service and Probe Data Service. An element on the left labeled On-Board Equipment includes Probe Data Application and is connected down to an Element labeled Vehicle Systems. An element on the right is labeled Probe Data Subscriber, includes Road Data Display and Management Tools is connected down to an element labeled Road Authority. Communications Service in the central element connects to Probe Data Application in the element on the left, to Probe Data Service in the central element, and to Road Data Display and Management Tools in the element on the right.

Figure 4-68 Vehicle Probe Data Generation Application System Overlay Diagram

The PDS at the SDN is composed of two components, the PDCS and the Probe Data Subscription Service (PDSS). The PDCS receives the data collected by the proxy applications at all RSEs and separates the various parameters into what are known as "topics." These topics are published using a conventional publish-and-subscribe architecture. The topics essentially correspond to each type of collected parameter across the entire system. Not all vehicles will deliver all topics, but since the PDS aggregates data from many RSEs and many vehicles, most topics will be populated with data. The Probe Data Subscriber can then subscribe to any or all topics based on multiple specified criteria. For example, a subscriber might subscribe to speed at various geographic locations, or to windshield wiper state inside the boundary of a particular county. As data conforming to a subscriber's criteria arrive at the PDS, they are immediately forwarded to that subscriber.
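The topic-based forwarding can be sketched as a minimal publish-and-subscribe loop. The topic names, criteria, and interfaces below are hypothetical illustrations, not the PDS API:

```python
# Illustrative publish/subscribe sketch (hypothetical topic names and
# criteria, not the PDS interfaces): subscribers register a topic and a
# predicate; matching records are forwarded immediately on arrival.
subscribers = []

def subscribe(topic, predicate, deliver):
    subscribers.append((topic, predicate, deliver))

def publish(topic, record):
    for t, pred, deliver in subscribers:
        if t == topic and pred(record):
            deliver(record)

received = []
# Subscribe to wiper state inside a (hypothetical) county bounding box:
subscribe("wiper_state",
          lambda r: 42.0 <= r["lat"] <= 42.5 and -83.5 <= r["lon"] <= -83.0,
          received.append)

publish("wiper_state", {"lat": 42.2, "lon": -83.2, "value": "ON"})   # forwarded
publish("wiper_state", {"lat": 41.0, "lon": -83.2, "value": "OFF"})  # filtered out
publish("speed",       {"lat": 42.2, "lon": -83.2, "value": 55})     # wrong topic
```

Because forwarding happens as data arrive and nothing is stored, this structure also reflects the "no memory" scalability property described below.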

The system operates by collecting “snapshots” of vehicle operating parameters. A number of these snapshots are typically assigned a Probe Sequence Number (PSN) so that they can be correlated as corresponding to a single vehicle when used by probe data subscribers. To avoid the obvious privacy concerns associated with this approach, the data collection policies include a required gap between PSN groups where no data is collected. This effectively separates snapshots of one PSN from those of another and makes it difficult to link the snapshots and thereby track the behavior of any single vehicle.
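The PSN grouping and privacy gap can be illustrated with a small sketch. The group and gap sizes below are hypothetical, not the actual POC collection policy:

```python
import itertools

# Illustrative sketch of PSN grouping with a privacy gap (hypothetical
# policy values): snapshots share a PSN for a while, then collection
# pauses before a new PSN starts, breaking linkability between groups.
SNAPSHOTS_PER_PSN = 4   # snapshots grouped under one sequence number
GAP_SNAPSHOTS = 2       # collection suppressed between PSN groups

def assign_psns(n_ticks):
    """For each collection tick, yield a PSN or None (gap, no data kept)."""
    out, psn = [], itertools.count(1)
    cycle = SNAPSHOTS_PER_PSN + GAP_SNAPSHOTS
    for tick in range(n_ticks):
        pos = tick % cycle
        if pos == 0:
            current = next(psn)
        out.append(current if pos < SNAPSHOTS_PER_PSN else None)
    return out

print(assign_psns(12))  # [1, 1, 1, 1, None, None, 2, 2, 2, 2, None, None]
```

A subscriber can correlate the four snapshots within a group as one vehicle's track, but the enforced gap prevents stitching group 1 to group 2.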

The system has no memory. This means that once data has been forwarded to all subscribers, it is deleted. This aids in scalability (as the system is expected to process enormous volumes of data) and it also avoids issues associated with public maintenance of data. The data collected is also anonymous. The messages are anonymously signed to assure that the sender is legitimate, and they are locally encrypted to avoid issues with radio eavesdropping, but no message contains any identifying information that might be used to link the data to a particular vehicle.

4.7.4.1 POC Probe Data Application Architecture

This section addresses the Probe Data Vehicle Component (PDVC) of the Probe Data Application. A detailed description of the PDS at the SDN is provided in Section 4.10.

The PDVC consists of five functional elements illustrated in Figure 4-69. The five functional elements are: Snapshot Generation, Buffer Management, Snapshot Transmission, Log Management and Probe Management Directives. These elements operate together to collect data from vehicle systems, compile messages and send the messages under specific conditions to a probe data proxy application at an RSE.

Block diagram with eight elements. The central element is Probe Data Vehicle Component with several subroutines and includes Buffer Management. Elements labeled Positioning Service, Security Service, Vehicle Interface Service, Power Management Service, Logging Service, and Communications Manager. Communications Manager has a connection via dashed line to an element labeled VII System.

Figure 4-69 PDVC Functional Elements Overview

Snapshot Generation

Snapshot Generation combines data obtained from the VIS with positioning data from the Positioning Service to form periodic, event-based or start/stop probe data snapshots, based on a programmable data generation policy defining the data collection rate, content, etc. The generation policy can be changed via directives from Probe Data Management Directives. The snapshots are then passed to Buffer Management.

Under normal operation, Snapshot Generation compiles snapshots at intervals based on the vehicle's speed. Where defined by the policy, Snapshot Generation also compiles messages based on specific events in the vehicle such as the activation of traction control measures, braking threshold events, etc. This approach allows the collection of unique events that may have relevance for road maintenance, weather assessment and similar applications.
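The speed-dependent and event-triggered behavior above can be sketched as follows; the interval breakpoints and event names are illustrative assumptions, not the POC generation policy.

```python
# Sketch of the snapshot generation policy: interval varies with speed, and
# defined vehicle events trigger snapshots immediately. Values are assumed.
def snapshot_interval_s(speed_mps):
    """Return the time between periodic snapshots for a given speed."""
    if speed_mps < 5.0:       # stopped or congested traffic
        return 20.0
    if speed_mps < 15.0:
        return 10.0
    return 5.0                # free flow

def is_event_snapshot(event):
    """Event-triggered snapshots bypass the periodic interval entirely."""
    return event in {"traction_control", "abs_activation", "hard_braking"}
```

A production policy would be downloaded via Probe Data Management Directives rather than hard-coded.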

Buffer Management

Buffer Management receives the snapshots from Snapshot Generation and manages the data store of these snapshots via a configurable data replacement policy. This policy is used to define, for example, how long a snapshot should remain unsent in the buffer before it is deleted, how long a gap between sets of snapshots taken under a given sequence number should be, and other criteria.

Buffer Management also supports key privacy policies that can have an important impact on the collection of probe data. For example, to avoid the ability to track a vehicle from one RSE to the next, the policy prohibits sending snapshots with the same PSN at two different RSEs. This protects the vehicle's privacy, but it also means that any data that is not sent in an RSE encounter is lost. For example, if the radio link is lost due to range or interference, the remaining data must be deleted.
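The PSN-scoped deletion rule described above can be sketched as follows; the `SnapshotBuffer` class and its method names are illustrative, not the POC implementation.

```python
# Sketch of the privacy rule: once snapshots from a PSN group have been
# partially sent at one RSE, the remainder may not be sent at another RSE,
# so they are purged when the radio link is lost.
class SnapshotBuffer:
    def __init__(self):
        self.snapshots = []          # list of (psn, payload)

    def add(self, psn, payload):
        self.snapshots.append((psn, payload))

    def on_link_lost(self, partially_sent_psn):
        """Delete every unsent snapshot sharing the PSN that was in flight."""
        self.snapshots = [s for s in self.snapshots
                          if s[0] != partially_sent_psn]

buf = SnapshotBuffer()
buf.add(7, "speed=31"); buf.add(7, "speed=29"); buf.add(8, "speed=28")
buf.on_link_lost(7)      # PSN 7 was mid-transfer when the RF link dropped
```

Data loss is accepted here as the cost of preventing RSE-to-RSE tracking.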

VII-Datagram Transport Layer Security (V-DTLS)

Because Probe Data Snapshots contain information about the behavior of the vehicle at locations other than where the data is uploaded to the system, it is important to protect the data from local eavesdropping. This approach prevents, for example, a police officer from intercepting probe data indicating that the vehicle was speeding at some earlier location, linking the anonymous data to the vehicle through observation, and thereby issuing a citation. While this practice might prove to be illegal, encryption of the local radio link was seen as a sure way to avoid such issues from the start, so the concept of a locally encrypted link was introduced.

V-DTLS builds on the well-known Datagram Transport Layer Security (DTLS) system used in the Internet. Upon entering the radio coverage zone of an RSE, the Probe Data Application receives a unique identifier from the RSE, for example, the IP address of the RSE. The Probe Data Application then uses this identity to encrypt the Probe Data Message, and signs it using the OBE Application's (IEEE P1609.2) anonymous certificate. Since the IP address of the RSE is always sent in the WSA, this approach requires no additional steps in the setup of the secure link. The key to this approach is that the RSE holds a private key that reverses the encryption performed by the OBE using the RSE identity as a key. This method is not as secure as a full asymmetric key exchange, but it is highly efficient because it avoids the time required for a complete secure key exchange, and it requires no compromise of the OBE's anonymity. For purposes of preventing local eavesdropping on data exchanges that will eventually become public, this approach was seen as a good compromise between security and efficiency.
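The core idea, that the OBE can begin encrypting using only the RSE identity carried in the WSA, with no key-exchange round trip, can be illustrated with a deliberately simplified toy. This is emphatically not the V-DTLS cipher: the SHA-256 keystream and XOR below are stand-ins for the real cryptography, used only to show the shape of the exchange.

```python
# Toy illustration (NOT actual V-DTLS): the sender derives a keystream from
# the RSE identifier announced in the WSA, so encryption can start without
# any additional key-exchange steps.
import hashlib

def keystream(rse_id: str, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(f"{rse_id}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, rse_id: str) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    ks = keystream(rse_id, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"probe data message"
ct = xor_cipher(msg, "192.168.10.5")    # OBE encrypts toward the RSE
pt = xor_cipher(ct, "192.168.10.5")     # RSE reverses it with the same key
```

In the real system the RSE holds private key material, the message is additionally signed with the OBE's IEEE P1609.2 anonymous certificate, and a scheme this simple would of course be readable by any eavesdropper who also heard the WSA.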

4.7.4.2 Probe Data Application Flow of Events

Preconditions

  1. The PDVC has registered with the OCM to receive service notifications when PDCS are advertised from an RSE.
  2. At least one RSE is set up with a Probe Data Proxy Application and is announcing the PDCS.
  3. The PDS is active at the SDN.
  4. A Network User has subscribed to one or more of the parameters collected in the vehicle through the PDS.

Flow of Events

  1. As the vehicle travels over a distance of 2 km, the PDVC Application collects various operating parameters from the vehicle via the VIS, and position and time from the Positioning Service, and compiles these into stored Probe Data Snapshots that share a common PSN.
  2. The vehicle approaches the coverage zone of an RSE that is announcing PDCS.
  3. The Communications Manager coordinates with the V-DTLS client on the RSE to set up a secure, anonymous communications session.
  4. The Communications Manager notifies the PDVC Application that the PDCS is available.
  5. The PDVC Application combines the stored snapshots sharing the common PSN into a series of Probe Data Messages.
  6. The PDVC Application passes the Probe Data Messages to the V-DTLS element on the OBE for encryption.
  7. The V-DTLS element encrypts the messages and transmits them to the RSE using the OBE DSRC Radio.
  8. If the PDVC Application does not complete sending all of the stored snapshots with the same PSN, the remaining snapshots with that PSN are deleted.
  9. The V-DTLS client on the RSE receives the Probe Data Messages, decrypts them and passes them to the PDC Proxy application.
  10. The PDC Proxy application passes the Probe Data Messages through the network to the PDS at the SDN.
  11. The PDS parses the probe data messages and passes each vehicle parameter to the Publish-And-Subscribe element of the PDS.
  12. The Publish-and-Subscribe element of the PDS sends messages to the Network User containing only those parameters to which it subscribed.

4.7.5 In-Vehicle Signage Application

The POC In-Vehicle Signage Application is designed to provide broadcast advisory messages to the vehicle driver based upon location and situation relevant information. Messages are prioritized both for delivery and presentation based on the type of advisory. These messages may be in the form of text, graphics, or audio cues presented by the generic vehicle HMI developed for all POC applications.

As shown in Figure 4-70, the In-Vehicle Signage Application is composed of two major components: the Network User-based application component, referred to as the Signage NUC, which generates advisory messages; and the Vehicle-based application component, referred to as the Vehicle Signage Component (VSC), which presents advisory messages to the driver.

Two NUCs were used in the POC: the Traveler Information NUC, which generates traffic and incident information, and the Signage NUC, which generates Next Exit and Work Zone advisory messages. These advisory messages are used by the VSC to inform the driver of current traffic conditions.

In addition to the two application components, the infrastructure system supports the Signage Application with three subsystems. The AMDS located at the SDN receives messages submitted by the NUCs and, based on the delivery instructions for the message, distributes the advisory messages to the appropriate RSEs. An AMDS Proxy running on each RSE accepts the messages from the AMDS and causes them to be broadcast by the RSE according to the delivery parameters for the message. The NUC also obtains location information about the RSEs from the ILS located at the SDN.

Block diagram with six elements. The central element is labeled VII Infrastructure System and has three items called out: Communications Service, Advisory Message Distribution Service, and Information Lookup Service. An element on the left labeled On-Board Equipment includes Vehicle Signage Component and is connected down to an element labeled Vehicle Operator. An element on the right is labeled Advisory Provider and includes Signage Network User Component and is connected down to elements labeled Other Data Sources and External Operator. Communications Service in the central element connects to Vehicle Signage Component in the element on the left, to Advisory Message Distribution Service in the central element, and to Signage Network User Component in the element on the right. Information Lookup Service in the central element has a two-way connection to Signage Network User Component in the element on the right.

Figure 4-70 In-Vehicle Signage Application System Overlay Diagram

4.7.5.1 POC Signage Application Architecture

The overall POC architecture of the Signage Application is shown in Figure 4-71. Advisory messages (signs) originate with a Network user. These signs are generated from data derived from a variety of sources, including the VII Probe Data system. The messages are compiled locally by the Network user using the Society of Automotive Engineers (SAE) J2735 standard format. These messages are then submitted to the AMDS along with delivery instructions. The delivery instructions indicate which RSEs the messages should be broadcast from, the priority of the message, details about the frequency of broadcasts, and the duration for which the message should be broadcast. The AMDS then distributes the messages to the appropriate RSEs, and the RSEs broadcast the messages in their local region according to the delivery instructions.
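A message plus its delivery instructions might be structured as sketched below. The field names and the `dispatch` helper are assumptions for illustration; they are not the SAE J2735 encoding or the AMDS wire format.

```python
# Sketch of an advisory message and its delivery instructions; the AMDS-style
# dispatch simply hands the message to each targeted RSE's playlist.
from dataclasses import dataclass

@dataclass
class DeliveryInstructions:
    rse_ids: list            # which RSEs should broadcast the message
    priority: int            # delivery/presentation priority
    repeat_interval_s: float # how often each RSE rebroadcasts
    duration_s: float        # how long the broadcasts continue

@dataclass
class AdvisoryMessage:
    packet_id: str
    payload: str             # J2735-encoded sign content in the real system
    instructions: DeliveryInstructions

def dispatch(msg, rse_registry):
    """AMDS-style fan-out to the playlists of the targeted RSEs only."""
    for rse_id in msg.instructions.rse_ids:
        rse_registry[rse_id].append(msg)

registry = {"RSE-12": [], "RSE-13": []}
m = AdvisoryMessage("WZ-001", "Work Zone Ahead",
                    DeliveryInstructions(["RSE-12"], 2, 1.0, 3600.0))
dispatch(m, registry)
```

Keeping the targeting in the delivery instructions, rather than in the payload, is what lets the AMDS route messages without interpreting their J2735 content.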

When the message is received by the VSC the specific display details (such as when and where the advisory message should be displayed) are extracted and the message is displayed when the vehicle situation meets the display conditions.

Block diagram with three elements in one row. The left element is labeled OBE and includes In-Vehicle Signage Component. The central element is labeled VII Network and includes multiple items labeled RSE on the left, and Advisory Message Distribution Service and Information Lookup Service on the right. The right element is labeled Service Provider and includes Network Signage Component and Traveler Information Component. In-Vehicle Signage Component in the left element connects to RSE in the central element; all RSE items connect to Advisory Message Distribution Service in the central element and to Network Signage Component and Traveler Information Component in the element on the right. Information Lookup Service in the central element also connects to Network Signage Component and Traveler Information Component in the element on the right.

Figure 4-71 POC Signage Application Architecture

4.7.5.1.1 Signage Network Component

As illustrated in Figure 4-72, the Network Signage Component consists of three functional elements: Communications, Advisory Message Generation and Map Data Store. The remaining sections detail the functional and external interface requirements for each of the functional elements.

Block diagram with four elements. The central element is Network Signage Component and includes Advisory Message Generation, Communications, and Map Data Store. Two elements on the right labeled Advisory Message Distribution Service and Information Lookup Service connect to the central element. One element on the right labeled Network User Operator connects to the central element.

Figure 4-72 Network Signage Component Functional Elements Overview

Communications

Communications authenticates the Network Signage Component with the AMDS. It receives advisory messages from the Advisory Message Generator and forwards them to the AMDS.

Advisory Message Generation

Advisory Message Generation creates advisory messages and authenticates the Network Signage Component with the ILS. It uses RSE location information received from ILS queries to create the delivery instructions for the messages. The advisory messages and their delivery instructions are forwarded to Communications.

Map Data Store

The Map Data Store is a local table of information obtained from the ILS at the SDN. It contains the addresses for RSEs corresponding to specific geographic locations, and is used to allow the Network User Operator (signage provider) to geographically target messages simply by using network addresses.

4.7.5.1.2 Vehicle Signage Vehicle Component

The VSC consists of four functional elements as illustrated in Figure 4-73. The four functional elements are: Advisory Message Management, Next Advisory Message Determination, Presentation Management, and Log Management. The remaining sections detail the functional and external interface requirements for each of the functional elements.

Block diagram with seven elements. The central element is labeled Vehicle Signage Component and includes Advisory Message Management, Next Advisory Message Determination, Presentation Management, and Logging. Two elements on the left labeled HMI Manager and Positioning Service connect to the central element. Two elements below labeled Logging Service and Power Management Service connect to the central element. One element on the right labeled Communications Manager connects to the central element, and also to an element on the right labeled VII System via a dashed line.

Figure 4-73 Vehicle Signage Component Functional Elements Overview

Advisory Message Management

The Advisory Message Management receives advisory messages from various NUCs, verifies that the advisory messages are not expired, duplicated or unsupported and stores valid advisory messages in the Advisory Message data store. It is also responsible for managing the Advisory Message data store by removing expired messages and determining what messages need to be replaced when the data store is full. Finally, Advisory Message Management will receive OBE shutdown notifications from the Power Management Service. This notification will be forwarded to the other functional elements to allow the VSC to shut down gracefully.
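The validation and store-management rules above can be sketched as follows. Eviction of the lowest-priority message when the store is full is an assumption about the replacement policy; the class and parameter names are illustrative.

```python
# Sketch of Advisory Message Management: reject expired, duplicate and
# unsupported messages; evict to make room when the data store is full.
import time

class AdvisoryStore:
    def __init__(self, capacity=3, supported=("WZ", "EXIT")):
        self.capacity, self.supported = capacity, supported
        self.messages = {}   # packet_id -> (priority, expires_at, payload)

    def accept(self, packet_id, kind, priority, expires_at, payload):
        if kind not in self.supported:     # unsupported type: discard
            return False
        if expires_at <= time.time():      # already expired: discard
            return False
        if packet_id in self.messages:     # duplicate Packet ID: discard
            return False
        if len(self.messages) >= self.capacity:
            # Assumed replacement policy: drop the lowest-priority message.
            victim = min(self.messages, key=lambda k: self.messages[k][0])
            del self.messages[victim]
        self.messages[packet_id] = (priority, expires_at, payload)
        return True
```

A periodic sweep for expired entries and the logging of evicted messages (per step 10 of the flow of events) would sit alongside this in a fuller implementation.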

Next Advisory Message Determination

The Next Advisory Message Determination interfaces with the Positioning Service and uses the vehicle's positioning information in conjunction with the location parameters of the advisory messages to determine when an advisory message should be presented to the driver. The advisory messages that are ready for presentation are forwarded to the Presentation Manager. Next Advisory Message Determination is also responsible for notifying the Presentation Manager that an advisory message no longer needs to be presented due to a change in the vehicle's position.
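The position-triggered presentation logic can be sketched as below: a sign is due when the vehicle is within its display radius, and is cancelled once the vehicle leaves. The flat equirectangular distance and the per-message radius parameter are illustrative simplifications.

```python
# Sketch of Next Advisory Message Determination using a simple planar
# distance approximation (adequate over the few hundred metres involved).
import math

def distance_m(lat1, lon1, lat2, lon2):
    dy = (lat2 - lat1) * 111_320.0
    dx = (lon2 - lon1) * 111_320.0 * math.cos(math.radians(lat1))
    return math.hypot(dx, dy)

def due_advisories(vehicle_pos, advisories):
    """advisories: (msg_id, lat, lon, radius_m) tuples.
    Returns the ids that should be presented at the current position."""
    lat, lon = vehicle_pos
    return [a[0] for a in advisories
            if distance_m(lat, lon, a[1], a[2]) <= a[3]]
```

Comparing the due set between position updates yields both the "present" and "no longer needs to be presented" notifications to the Presentation Manager.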

Presentation Management

The Presentation Management manages the presentation of advisory messages to the driver via the HMI Manager. Typical Signage displays are shown in Figures 4-74 and 4-75.

Rendering of a screen showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Sign selected, a warning icon and information lines indicating speed limit and lane status. Rendering provided by Delphi

Figure 4-74 Example Road Advisory Display


Rendering of a screen showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Sign selected, and information lines indicating icons for food services, directional arrows, and distances. Rendering provided by Delphi

Figure 4-75 Example Next Exit Services Display

4.7.5.2 In-Vehicle Signage Application Flow of Events

This section describes the end-to-end flow of events of the application which consists of generating Signage and Advisory Messages, disseminating Geographically Focused Signage and Advisory Message Information, and presenting Signage Information.

Preconditions

  1. NUC has registered with the VII system to broadcast messages. There are two NUCs available for POC: Network Signage and Traveler Information.
  2. All available NUCs have been authenticated to use the ILS.
  3. A VSC has registered with the OCM to receive advisory messages.
  4. The VSC has received a Service Available event from the OCM.

Flow of Events

  1. A Network user creates a new advisory message at a NUC.
  2. The Network user provides delivery instructions for each advisory message based on identifiers for RSEs within the geographic area where the messages are to be disseminated.
  3. The NUC sends the advisory message with delivery instructions to the AMDS at the SDN.
  4. The NUC receives an acknowledgment as to the ability of the AMDS to forward the message to the identified RSEs.
  5. Each RSE sets up its transmission playlist according to the advisory message delivery instructions.
  6. Each RSE transmits announcements and messages on the appropriate DSRC channel according to the RSE operations and the advisory message delivery instructions.
  7. The vehicle containing the VSC moves into range of an RSE.
  8. The VSC receives the advisory messages from the RSE.
  9. The VSC assesses the message for relevance.
    1. The VSC verifies that the advisory message is not a duplicate using the Packet ID and the issue time of the advisory message.
    2. Duplicate advisory messages are discarded.
    3. The VSC discards advisory messages with unknown or unsupported Packet IDs.
  10. The VSC manages the data store of advisory messages.
    1. The VSC adds relevant advisory messages to its advisory message data store.
    2. The VSC updates relevant advisory messages in its Advisory Message data store based on updated information received from the NUCs.
    3. The VSC deletes expired advisory messages from its data store.
    4. The VSC determines what advisory messages are to be deleted and deletes them to make room for new messages. Messages are logged before they are deleted.
  11. The VSC uses the vehicle's positioning data and the advisory message presentation location information to determine when a message needs to be presented to the driver.
  12. The VSC forwards advisory messages that need to be presented to the driver via the HMI Manager.
  13. The HMI Manager presents the advisory message to the driver.
  14. The VSC cancels advisory messages that are being presented to driver when they are no longer applicable.

4.7.6 Trip-Path Application

The Trip-Path Application is intended to collect information about how vehicles move and use the road network. By knowing where vehicles start and end their journeys and what roads they use to get from A to B, road management authorities can better understand the nature of road demand and better plan for changes, new additions and improvements. Because of privacy considerations, Trip-Path is an “opt-in” application, meaning that only those users who choose to participate have the application operating in their vehicle.

Figure 4-76 shows how the Vehicle TPGA fits into the VII POC architecture.

In a manner similar to PDC, the Trip-Path client in the OBE collects and saves location information at various intervals as the vehicle moves. When the vehicle completes a trip, the entire trip is saved, thereby capturing the origin, route and destination information. On the next run cycle, the trip data is uploaded to a Network user application that simply captures and stores the trip information. In the POC, no effort was made to analyze or use the Trip-Path data.

Block diagram with five elements. The central element is labeled VII Infrastructure, with Communications Service highlighted. An element on the left labeled On-Board Equipment includes Trip-Path Application and connects downward to Vehicle Systems. An element on the right labeled Transaction Service Provider includes Trip Data Aggregation and Analysis and connects down to Road Authority. Communications Service in the central element connects to Trip-Path Application in the element on the left and to Trip Data Aggregation and Analysis in the element on the right.

Figure 4-76 Trip-Path Application System Overlay Diagram

4.7.6.1 POC Trip-Path Application Architecture

As illustrated in Figure 4-77, the Trip-Path Generation Application (TPGA) consists of four functional elements: Trip-Path Collection, Buffer Management, Trip-Path Transmission (TPT), and Log Management.

Block diagram with seven elements. The central element is labeled Vehicle Trip-Path Generation Application and includes Trip-Path Data Collection, Buffer Management, Trip-Path Data Transmission, and Logging. Two elements on the left labeled Positioning Service and Security Service connect to the central element. Two elements at the bottom labeled Logging Service and Power Management Service connect up to the central element. An element on the right labeled Communications Manager connects to the central element and also via a dashed line to an element on the right labeled VII System.

Figure 4-77 Vehicle Trip-Path Generation Functional Elements Overview

Trip-Path Collection

Trip-Path Collection records data associated with two different types of events:

  1. The vehicle's ignition is turned off, ending the current trip.
  2. The current trip reaches its maximum size: 4000 Trip-Path data points, or a duration of six hours between the first and last data points.

A new trip starts when either of these conditions has been met. In other words, the vehicle may collect multiple trips between ignition on and ignition off, and each trip can have no more than 4000 Trip-Path data points, with the duration between the first and last Trip-Path data points not exceeding six hours.

The Trip-Path information is recorded as data points generated at configurable time and/or distance intervals. To preserve user privacy, Trip-Path Collection begins recording Trip-Path data points only after the vehicle has traversed a configurable distance from the vehicle's ignition-on location. This eliminates the potential for the Trip-Path data to indicate a common location where the vehicle starts every trip (e.g., home).
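The privacy start-gap and the trip-size limits can be sketched together as below. The 2 km gap matches the flow of events later in this section; the class and method names are illustrative.

```python
# Sketch of Trip-Path Collection: suppress points near the ignition-on
# location, and close a trip at 4000 points or 6 hours.
PRIVACY_GAP_M = 2000.0        # POC value per the flow of events
MAX_POINTS = 4000
MAX_DURATION_S = 6 * 3600

class TripCollector:
    def __init__(self):
        self.trip = []        # list of (t, metres_from_ignition_on)

    def record(self, t, metres_travelled):
        """Returns a closed trip when the size limit rolls one over, else None."""
        if metres_travelled < PRIVACY_GAP_M:
            return None       # still inside the privacy gap: drop the point
        if self.trip and (len(self.trip) >= MAX_POINTS or
                          t - self.trip[0][0] >= MAX_DURATION_S):
            closed, self.trip = self.trip, []   # limit reached: new trip
            self.trip.append((t, metres_travelled))
            return closed
        self.trip.append((t, metres_travelled))
        return None
```

Ignition-off would close the in-progress trip through a separate call, which is omitted here for brevity.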

Buffer Management

Buffer Management manages the data store of trips. When it has a trip to deliver to the Trip-Path Data Accumulator Application during an RSE encounter, it delivers the data to Trip-Path Transmission.

Trip-Path Transmission

Trip-Path Transmission (TPT) sends Trip-Path data to the Network Trip-Path Data Accumulator (NTPDA) Application via the OCM.

When TPT receives notification from the Communications Manager, it exchanges handshake messages with the NTPDA and sets up a secure session. It then sends the data in segments. To assure data transmission integrity, the NTPDA and TPT exchange acknowledgements before sending the next segment, so if data is lost or garbled due to the RF connection, the TPT resends it.
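The acknowledged, segment-by-segment exchange amounts to a stop-and-wait protocol, sketched below. The channel abstraction and retry limit are assumptions; the real exchange runs over the secure session with the NTPDA.

```python
# Sketch of the TPT/NTPDA stop-and-wait transfer: each segment is retried
# until acknowledged before the next segment is sent.
def send_trip(segments, channel, max_retries=3):
    """channel.send(seg) returns True if the NTPDA acknowledged the segment."""
    for seg in segments:
        for _ in range(max_retries):
            if channel.send(seg):
                break            # acknowledged: move to the next segment
        else:
            return False         # link lost: trip data remains buffered
    return True                  # all segments acked: trip may be deleted

class LossyChannel:
    """Test double that drops every other transmission."""
    def __init__(self):
        self.calls = 0
    def send(self, seg):
        self.calls += 1
        return self.calls % 2 == 0
```

Only a `True` return, meaning every segment was acknowledged, would allow TPT to delete the trip from the Buffer Management data store.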

TPT deletes all information associated with a trip from the Buffer Management data store upon receipt of an acknowledgement of a successful transmission of the trip message from the NTPDA.

4.7.6.2 Trip-Path Application Flow of Events

This section describes the end-to-end flow of events of the application which consists of Generating Trip-Path records and uploading them to the Trip-Path Network Component.

Preconditions

  1. The TPGA has registered with the OCM to receive service notifications when Trip-Path Collection services are advertised from an RSE.
  2. At least one RSE is set up to announce Trip-Path Collection Services.
  3. The conditions for a new trip have been met (a path has been traveled from ignition on to ignition off with a duration of no more than six (6) hours, or a maximum of 4000 Trip-Path data points has been taken).

Flow of Events

  1. When the vehicle starts, the TPGA begins collecting location points from the OBE Positioning Service.
  2. The TPGA discards position data points until the distance from the start of the trip to the current position exceeds 2 km. Once this threshold is reached, Trip-Path Generation begins to record the position data points at regular intervals.
  3. The vehicle travels over the course of the trip, and reaches its destination.
  4. The vehicle ignition is turned off.
  5. The vehicle ignition is turned on at some later time.
  6. On key-on, the TPGA begins a new trip log. It examines the records of the prior trip log and discards records corresponding to the 2 km immediately preceding the end of the trip (arrival at the key-off destination).
  7. The vehicle travels over the course of the new trip and encounters an RSE that is advertising Trip-Path Collection Services.
  8. The Communications Manager notifies the TPGA that the Trip-Path Collection Service is available.
  9. The Vehicle TPGA communicates with the Network Trip-Path Component to set up a secure data exchange.
  10. The Vehicle TPGA sends a collected Trip-Path data record.
  11. The Network Trip-Path Component acknowledges receipt of the data record.
  12. Steps 10 and 11 continue until all data has been sent.
  13. The Vehicle TPGA deletes all sent Trip-Path records.

4.7.7 Off-Board Navigation Application

The POC OBNA provides turn-by-turn navigation cues to the vehicle driver with updates based upon location and situation-relevant information while the vehicle is en route. These cues are in the form of text or graphics such as driving instructions displayed on the HMI screen and/or audio cues from the Vehicle HMI.

The routes calculated by the OBNA can be enhanced by link travel time information collected by the VII Probe Data system and other external sources. One objective of the application is to determine the route with the shortest travel time for a designated vehicle. As the vehicle travels, updates to the route are provided through the VII system. If circumstances indicate that an alternate route will give the driver a shorter travel time, then the recommended route is changed to the newly determined route with shorter travel time.

The overall system context for the OBNA is shown in Figure 4-78. This figure illustrates that a route request originates at the OBE, and is communicated through the TSM (See Section 4.7.7.1) to the navigation service provider.

The OBNA is used to test, among other system functions, the ability of the system to route messages through the system to a remote Network user, and to maintain a service transaction session between the OBE and the Network user as the vehicle moves between RSEs. This allows the system to provide very long and detailed route information that might, under certain circumstances, be impossible to communicate in a single session as the vehicle moves through an RSE coverage zone.

Block diagram with six elements. The central element is labeled VII Infrastructure System and has Communications Service highlighted. An element on the left labeled On-Board Equipment includes Off-Board Navigation Application and connects down to an element labeled Vehicle Operator. An element on the right is labeled Transaction Service Provider and includes Transaction Service Manager and Navigation Service, which have a two-way connection. This element also connects down to two elements labeled Other Road Data and Service Operator. Communications Service in the central element connects to Off-Board Navigation Application in the element on the left and Transaction Service Manager in the element on the right.

Figure 4-78 OBNA System Overlay Diagram

4.7.7.1 POC OBNA Architecture

As shown in Figure 4-79, the OBNA is composed of two major application components: a Network Component that manages the transactions and calculates navigation routes for all requesting vehicles, and a Vehicle Component that resides on the OBE to allow the driver to request a route and to display the results of the request (the route) to the driver.

Block diagram with three elements. The central element is labeled VII Network and includes multiple items labeled RSE on the left and SDN on the right. An element on the left labeled OBE includes Off-Board Navigation Application. An element on the right labeled Service Provider includes Transaction Service Manager, Network Navigation Application, and Dynamic Traffic Information Server. Off-Board Navigation Application in the element on the left connects to RSE in the central element. All RSE items connect to SDN in the central element, which connects to Transaction Service Manager in the element on the right. Transaction Service Manager connects to Network Navigation Application, which connects to Dynamic Traffic Information Server. Transaction Service Manager has a connection via dashed line through Alternate Web Services Interface to Dynamic Traffic Information Server.

Figure 4-79 OBNA Functional Component Diagram

In the POC, the TSM was located at the SDN (See Section 4.10), and the Navigation Service components were located at the Navteq facility in Chicago. Communications from the TSM to the Navteq system were via the Internet.

The TSM was described in detail in Section 4.5.3.2.2. The following sections describe the Network Component and the Vehicle Component.

4.7.7.1.1 Off-Board Navigation Vehicle Component

As illustrated in Figure 4-80, the OBNA Vehicle Component consists of five (5) functional elements: Route Data Management, Next Maneuver Determination, Presentation Management, Off-Route Detection and Log Management; however, Off-Route Detection was not implemented in the POC.

Block diagram with seven elements. The central element is labeled Off-Board Navigation Vehicle Component and includes Route Data Management, Next Maneuver Determination, Presentation Management, Off Route Detection, and Logging. Two items to the left labeled Positioning Service and Security Service connect to the central element. Two elements below labeled Logging Service and Power Management Service connect to the central element. One element to the right labeled Communications Manager connects to the central element, and also via a dashed line to an element labeled VII System.

Figure 4-80 Vehicle Component Functional Elements

Presentation Management Element

Presentation Management provides the software interface between the OBNA Vehicle Component and the HMI Manager. It is used to present available destination options to the driver and to obtain the driver's destination selection. It also presents data relating to the results of a route search received from the Network Component. These results may be in the form of turn-by-turn directions or in the form of a route overview map. The Presentation Management screens (via the HMI Manager) also allow the user to scroll through multi-page route files.

Route Data Management Element

Route Data Management sends route requests to, and receives route responses from, the OBNA Network Component. It extracts turn-by-turn maneuver information from the route responses and stores the instructions in the Route Data store. Because setting destinations represents a rather complex user interface problem (one that was not the focus of the POC program), the OBNA Vehicle Component is based on a set of pre-stored destinations. This eliminates the need for a specialized user interface to allow the user to enter a specific destination. It is assumed that a production implementation would address this issue in whatever way the developer saw fit.

Route Data Management thus supports a maximum of ten (10) predefined destinations that can be selected by the driver from a list presented on the HMI display. The predefined destinations are updated via the OBE interface to a USB External Memory Device.

In operation, Route Management sends predefined destination information to Presentation Management for display on the vehicle HMI. An example of the destination display is shown in Figure 4-81.

Rendering of a screen showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Nav selected, and under Destination Selection a list of numbered destinations is provided, with scroll buttons to the right. Rendering provided by Delphi

Figure 4-81 Example Destination List

When the driver selects a destination from the list, Route Data Management receives the destination information from Presentation Management and combines it with the vehicle's location at the time of request to form a Route Request. Route Data Management then sends this route request to the OBNA Network Component via the OCM at the first interaction with an RSE that supports communications with the TSM.

When Route Data Management receives Route Response Maneuvers from the OBNA Network Component via the OCM over one or more RSE encounters, it stores all route maps and maneuver information and assembles a complete route information set. At this time, it provides the Route Response Preview to the Presentation Management for display on the HMI. A typical Route Response Preview screen is shown in Figure 4-82.
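The assembly of a complete route information set over one or more RSE encounters can be sketched as below; the fragment numbering scheme is an assumption for illustration only:

```python
class RouteAssembler:
    """Collects route-response fragments delivered over one or more
    RSE encounters and reports when the full route is available."""

    def __init__(self, total_fragments):
        self.total = total_fragments
        self.fragments = {}  # fragment index -> list of maneuvers

    def receive(self, index, maneuvers):
        # Fragments may arrive out of order across encounters.
        self.fragments[index] = maneuvers

    def complete(self):
        return len(self.fragments) == self.total

    def route(self):
        """Return the assembled maneuver list, or None if fragments
        are still outstanding."""
        if not self.complete():
            return None
        out = []
        for i in range(self.total):
            out.extend(self.fragments[i])
        return out
```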

Rendering of a screen showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Nav selected, and a local map display with buttons labeled Update Route, Cancel Route, and Turn List. Rendering provided by Delphi

Figure 4-82 Route Overview Screen

Next Maneuver Determination Element

When the Presentation Manager is displaying the maneuver list, the Next Maneuver Determination element uses the vehicle's positioning information to determine when the next maneuver on the route list should be highlighted. When the vehicle is close to the maneuver point, Next Maneuver Determination passes a maneuver display to Presentation Management for display. So, as the vehicle moves along the route, the next maneuver in the route list is presented to the driver. An example maneuver diagram is provided in Figure 4-83.
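A minimal sketch of this proximity logic follows, assuming a simple distance threshold; the 50 m value and all names are illustrative, not POC parameters:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def next_maneuver(position, maneuvers, index, threshold_m=50.0):
    """Advance the highlighted maneuver once the vehicle is within
    threshold_m of the current maneuver point."""
    lat, lon = position
    m = maneuvers[index]
    if distance_m(lat, lon, m["lat"], m["lon"]) <= threshold_m and index + 1 < len(maneuvers):
        index += 1
    return index
```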

Rendering of a screen showing top buttons labeled Sign, Nav, Toll, Gas, Park, with Nav selected, and a local map display with information denoting a turn and distance, and buttons labeled Update Route, Cancel Route, and Turn List. Rendering provided by Delphi

Figure 4-83 Maneuver Diagram Display

Log Management Element

Log Management is used to log events and operations of the OBNA Vehicle Component for diagnostic and testing purposes.

4.7.7.1.2 Off-Board Navigation Network Component

As illustrated in Figure 4-84, the OBNA Network Component consists of seven (7) functional elements: Communications, Geo-Coding, Route Calculation, Direction Generation, Map Image Generation, Map Database Management and Log Management.

Block diagram with eight elements. The central element is labeled Off-Board Navigation Network Component and includes Geo-Coding, Route Generation, Direction Generation, Map Image Generation, Communications, Map Database Management, and Logging. Two elements to the left labeled Positioning Service and Security Service connect to the central element. Two elements at the bottom labeled Logging Service and Power Management Service connect to the central element. One element to the right labeled Transaction Services Manager connects to the central element and down to the element External Road Information Sources. Transaction Services Manager connects via a dashed line to an element labeled VII System.

Figure 4-84 Off-Board Navigation Network Component Functional Elements

Communications Element

Communications receives route requests from the OBNA Vehicle Component, and sends a route response, using information provided by other Network Component functional elements, to the OBNA Vehicle Component in return. All communication with the Vehicle Component is done via the TSM. Route requests are forwarded to the Geo-Coding element for further processing.

Communications also receives dynamic traffic information from external sources. This information may come from other services orchestrated by the TSM, or it may come from a direct source. For the POC, the direct source was used. Dynamic traffic information is forwarded to Map Database Management for further processing.

Geo-Coding Element

Geo-Coding converts route requests containing destination addresses and the geographic (latitude/longitude) position of the vehicle into road network elements that correspond to the Map Database used by Map Database Management. This is a necessary step since the Map Database uses a special road segment format to create a road network that only indirectly corresponds to geographic positions or street addresses. Once the conversion is complete, the Geo-Coded route origin and destination are passed to the Route Calculation element.
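A greatly simplified stand-in for this conversion snaps a geographic position to the nearest node in an assumed road network table; the structure is illustrative, and the actual Map Database format differs:

```python
def snap_to_network(lat, lon, nodes):
    """Return the ID of the road-network node nearest to the given
    position. `nodes` maps node IDs to (lat, lon); for a sketch at
    local scale, squared degree distance is an adequate comparator."""
    def sq_dist(node_id):
        nlat, nlon = nodes[node_id]
        return (nlat - lat) ** 2 + (nlon - lon) ** 2
    return min(nodes, key=sq_dist)
```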

Route Calculation Element

Route Calculation computes a route with the shortest travel time, taking advantage of current dynamic link information.

Route Calculation receives Geo-Coded origin and destination information from Geo-Coding. It then interacts with Map Database Management to compute the route. This process uses well established algorithms to search the road network database to identify and compare various paths through the road network to get from the origin (the vehicle's current location) to the destination. Part of the comparison process involves using travel time for various road segments to determine the combination of roads that results in the shortest possible overall travel time. The computed route at this point consists of a sequence of road segment identifiers. These are passed to Direction Generation to create a human usable list of maneuvers.
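One of the well-established algorithms referenced above is Dijkstra's shortest-path search. The sketch below minimizes total travel time over an assumed graph encoding (a dictionary mapping nodes to (neighbor, travel time) pairs); the POC Map Database uses its own road segment format:

```python
import heapq

def shortest_route(segments, origin, destination):
    """Dijkstra's algorithm over a road network. `segments` maps a
    node to a list of (neighbor, travel_time_s) pairs. Returns the
    node sequence with the lowest total travel time, or None if the
    destination is unreachable."""
    queue = [(0.0, origin, [origin])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, travel_time in segments.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + travel_time, nxt, path + [nxt]))
    return None
```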

Direction Generation Element

Direction Generation creates turn-by-turn maneuver information for a route based on the road segment list generated by Route Calculation. This element uses a combination of pre-stored maneuvers (for example, “Bear Right” or “Turn Left”) and detailed information about the intersections of the road segments to create a textual sequence of maneuvers. These are then sent to the OBNA Vehicle Component via Communications and the TSM.

Direction Generation also passes the turn-by-turn direction list to the Map Image Generation element, which creates the various map images used by the vehicle system.

Map Image Generation Element

Map Image Generation creates several map images corresponding to the various components of the route information package. Specifically, it creates a Route Overview map showing the entire route from start to finish, as shown in Figure 4-85. It creates segment maps showing the route over smaller, higher-resolution portions of the route, and it generates individual diagrammatic map images showing each specific maneuver in the turn list, as shown in Figure 4-86. Map Image Generation passes these map image files to Communications, which forwards them to the OBNA Vehicle Component for display to the driver.

A diagram shows major streets and highways on a map, highlighting the route to destination. Diagram provided by Navteq

Figure 4-85 OBNA Overview Map


A diagram shows local travel area, highlighting route with directional arrows. Diagram provided by Navteq

Figure 4-86 OBNA Maneuver Map

Map Database Management Element

Map Database Management updates the map database with current dynamic link information received from the VII Traveler Information Application and Dynamic Link Data Provider(s) (e.g. MDOT's Data Use Analysis and Processing Service).

4.7.7.2 Off-Board Navigation Application Flow of Events

The Off-Board Navigation flow of events represents the nominal case operational flow for the VII OBNA.

Pre-conditions

  1. The OBNA Vehicle Component has registered with the OCM for the VII system Communication Service.
  2. The OBNA Vehicle Component has a subscription with the OBNA Network Component.
  3. The Vehicle Component is preconfigured to know the URL of the Network Component.
  4. A Dynamic Link Data Provider has up-to-date information in its store needed to generate dynamic traffic information.
  5. The OBNA Network Component has subscribed to receive specific information from the Dynamic Link Data Provider(s) via the TSM.
  6. The OBNA Vehicle Component has a list of pre-set destinations.

Flow of Events

  1. Using the OBE HMI the driver activates the OBNA.
  2. The OBNA Vehicle Component presents a destination list to the driver using the vehicle HMI.
  3. The driver selects a destination from the displayed destination list.
  4. The driver is notified that route guidance will not be available until the vehicle comes within communication range of an RSE that supports communications to the TSM.
  5. The vehicle comes within communication range of an RSE that supports communications to the TSM.
  6. The OBNA Vehicle Component sends a request for route guidance to the OBNA Network Component, including user identification, current position and selected destination.
  7. The OBNA Network Component determines the route with the shortest travel time using current dynamic link information.
  8. The OBNA network component sends route guidance information to the OBNA Vehicle component over one or more RSEs.
  9. In the event that the vehicle leaves the RSE communications zone before the OBNA Network Component responds, the scenario proceeds as per events 10 through 14.
  10. The TSM determines that the OBNA Vehicle Component is not responding, and saves the messages from the OBNA Network Component.
  11. The Vehicle enters the communications zone of another RSE.
  12. The OCM communicates with the TSM, and re-establishes the transaction session using the IP address of the new RSE.
  13. The TSM re-sends the (remaining) route guidance information to the OBNA Vehicle Component via the new RSE.
  14. The driver is presented with the route guidance information.
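The store-and-forward behavior in events 9 through 13 can be sketched as follows; the class and method names are illustrative assumptions, not TSM interfaces:

```python
class TransactionSession:
    """Sketch of the TSM holding undelivered route guidance when a
    vehicle leaves an RSE zone mid-delivery, then re-sending the
    remaining messages once the OCM re-establishes the session via
    a new RSE."""

    def __init__(self, messages):
        self.pending = list(messages)

    def deliver(self, count):
        """Deliver up to `count` messages; return those delivered."""
        sent, self.pending = self.pending[:count], self.pending[count:]
        return sent

    def resume(self):
        """On session re-establishment via a new RSE, re-send all
        remaining saved messages."""
        return self.deliver(len(self.pending))
```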

4.7.8 Heartbeat Application

The Heartbeat Application generates and sends Heartbeat messages at a configurable rate, and logs all Heartbeat messages received from other sources. The Heartbeat message is a WSM defined by the SAE J2735 standard. It contains vehicle position and speed and a few other vehicle-related parameters.

The intent of the POC Heartbeat Application was not to test the utility of the Heartbeat message itself (as a safety element) but to test the ability of DSRC to support high-rate high-priority messaging in the presence of other DSRC uses. Figure 4-87 shows how the Vehicle Heartbeat Generation Application fits into the VII POC architecture.

Block diagram with five elements. The central element is labeled VII Infrastructure System with Communications Service in highlighting. To the left, an element labeled On-Board Equipment includes Heartbeat Application and connects down to an element labeled Vehicle Systems. Lower, another element labeled On-Board Equipment includes Heartbeat Application and connects down to an element labeled Vehicle  Systems. Communications Service in the central element connects to Heartbeat Application in both elements to the left.

Figure 4-87 Vehicle Heartbeat Generation Application System Overlay Diagram

4.7.8.1 POC Heartbeat Application Architecture

The Heartbeat Vehicle Component (HBVC), as shown in Figure 4-88, consists of three elements, Heartbeat Generation, Heartbeat Transmission and Logging:

Block diagram with seven elements. The central element is labeled Heartbeat Vehicle Component and includes Heartbeat Generation, Heartbeat Transmission, and Logging. Two elements to the left labeled Positioning Service and Vehicle Interface Service connect to the central element. Below, an element labeled Logging Service connects to the central element. One element to the right is labeled Communications Manager and connects to the central element, and also down to an element labeled Security Service. Communications Manager also connects via a dashed line to an element labeled VII System (Other OBEs).

Figure 4-88 Heartbeat Vehicle Component Functional Elements Overview

Heartbeat Generation Element

Heartbeat Generation combines vehicle sensor data from the VIS and positioning data from the Positioning Service into the periodic Heartbeat message. The generation policy may be changed by changing configuration parameters.

This data is collected on a regular schedule and compiled into a Heartbeat WSM in accordance with the content and format defined in SAE J2735. The snapshot is generated with whatever data is provided by the API, including null or zero values.

The compilation schedule is set by a configuration parameter and it may be set to any rate from zero up to 50 Hz (one message every 20 ms).
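The relationship between the configured rate and the compilation interval can be illustrated with a small helper; the function name and range check are assumptions for illustration only:

```python
def heartbeat_interval_ms(rate_hz):
    """Convert the configured generation rate (0 to 50 Hz) into a
    compilation interval in milliseconds. A rate of zero disables
    generation; 50 Hz corresponds to one message every 20 ms."""
    if not 0 <= rate_hz <= 50:
        raise ValueError("rate must be between 0 and 50 Hz")
    if rate_hz == 0:
        return None  # generation disabled
    return 1000.0 / rate_hz
```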

Heartbeat Transmission Element

Heartbeat Transmission passes generated Heartbeats to the Communications Manager for broadcast using the DSRC Radio.

Heartbeat Transmission logs all messages, sent and received, for the purposes of analysis, debugging and testing, and deletes the messages after they are logged.

4.7.8.2 Heartbeat Application Flow of Events

The following flow of events describes each step executed by the Heartbeat Application during normal operation.

Preconditions

  1. The Heartbeat Application in Vehicle A has registered with the OCM to send and receive Heartbeat WSMs.
  2. The Heartbeat Application in Vehicle B has registered with the OCM to send and receive Heartbeat WSMs. Optionally, additional vehicles running the Heartbeat Application may also be present.
  3. Vehicles A and B are in DSRC Radio range of each other.

Flow of Events

  1. The Heartbeat Application in Vehicle A collects data from the Positioning Service and Vehicle Interface and compiles a Heartbeat Message.
  2. The Heartbeat Application in Vehicle A passes the Heartbeat message to the Communications Manager in Vehicle A.
  3. The Communications Manager in Vehicle A optionally signs the message using the OBE Security Services, and submits the Heartbeat message to the DSRC Radio for transmission.
  4. The DSRC Radio in Vehicle A transmits the message when the DSRC channel is clear.
  5. The DSRC Radio in Vehicle B receives the Heartbeat message, and passes the message to the Communications Manager in Vehicle B.
  6. The Communications Manager in Vehicle B optionally verifies the received message using the Vehicle B OBE Security Services.
  7. The Communications Manager in Vehicle B passes the verified message to the Heartbeat application in Vehicle B.
  8. The Vehicle B Heartbeat Application logs the receipt of the message and discards the message.
  9. Vehicles A and B repeat this operation, each serving as both sender (Vehicle A above) and receiver (Vehicle B above) on a schedule set by an internal configuration parameter for each vehicle.

4.8 Network Description

The infrastructure network is shown schematically in Figure 4-89. This figure primarily illustrates the internal structure of the RSE, and the SDN. Many details have been omitted for clarity. A more detailed description is provided in Volume 2b.

The SDN is composed of interfaces to the Backbone (to other SDNs), the backhaul (to RSEs) and the Access Gateway (to Network Users), routing functions to properly direct message traffic, and a set of core services.

Block diagram with five elements in two rows. On the top row left an element labeled RSE includes Probe Data Proxy, AMDS Proxy, Lightweight Proxy, Radio Handler, DSRC Radio, and Routing and Local Interface. On the top row right an element labeled Service Delivery Node includes Probe Data Collection, Probe Data Distribution, AMDS, Info Lookup Service, Routing, Backhaul Interface, Backbone Interface, and Certificate Authority. On the bottom row left an element is labeled Local Transaction Processor. On the bottom row right, an element labeled Enterprise Network Operations Center is positioned adjacent to an element labeled Certificate Authority. Both elements connect up to the element labeled Service Delivery Node. In the first element, Routing and Local Interface connects across to Backhaul Interface in the element labeled Service Delivery Node and down to the element labeled Local Transaction Processor.

Figure 4-89 Infrastructure Side System

The POC network core services include:

Advisory Message Delivery Service (AMDS) accepts submitted messages from Network Users via the Access Gateway. These messages include delivery instructions such as RSE ID(s), repeat timing and message lifespan. The AMDS then passes these messages to the AMDS proxies resident in the appropriate RSEs for local broadcast to OBEs in the vicinity.
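A sketch of how an RSE-resident AMDS proxy might apply the delivery instructions (target RSE IDs, repeat timing, and message lifespan); all field and function names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    """Advisory message with its delivery instructions, as submitted
    by a Network User via the Access Gateway."""
    payload: str
    rse_ids: list        # RSEs that should broadcast the message
    repeat_s: float      # rebroadcast period in seconds
    lifespan_s: float    # stop broadcasting after this many seconds

def due_for_broadcast(advisory, rse_id, now_s, last_sent_s):
    """Decide whether this RSE's AMDS proxy should rebroadcast the
    advisory at time now_s (seconds since submission)."""
    if rse_id not in advisory.rse_ids:
        return False
    if now_s > advisory.lifespan_s:
        return False  # message lifespan has elapsed
    return last_sent_s is None or now_s - last_sent_s >= advisory.repeat_s
```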

Probe Data Collection Service (PDC) interacts with the Probe Data Proxy in the RSE to accept a stream of probe data messages gathered as OBEs pass the RSE. The PDC then passes this data to the Probe Data Distribution Service (PDDS).

Probe Data Distribution Service (PDDS) accepts a stream of probe data messages from the PDC. It then parses these messages and places the various content elements (different probe data parameters such as speed, vehicle status, events, etc.) into queues that are structured along these topical categories. The data in these topical queues is then sent via the Access Gateway to Network Users that have established subscriptions on the basis of these topics. So, a Network User that subscribes to Topic A at locations X, Y, and Z will receive any data associated with Topic A that is collected at any of the specified locations.
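The topic-and-location matching described above can be sketched as follows, assuming a simple subscription table; this is an illustration, not the POC data model:

```python
def matching_subscribers(subscriptions, topic, location):
    """Return the subscribers whose (topic, locations) subscription
    covers a probe data element of the given topic collected at the
    given location -- e.g. Topic A at locations X, Y, and Z.
    `subscriptions` maps a user ID to (topic, set of locations)."""
    return [
        user
        for user, (sub_topic, locations) in subscriptions.items()
        if sub_topic == topic and location in locations
    ]
```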

Information Lookup Service (ILS) is a support service used by Network Users to look up information about the system. It is most often used to identify RSEs according to location so that a subscriber or provider can then properly reference the RSE.

Certificate Authority (CA) is a central point of trust in the system. The CA provides certificates to OBEs that attest to the authenticity and legitimacy of an OBE for use in signing both identified and anonymous messages, and provides certificates to other users to allow them to exchange signed and encrypted messages with OBE applications.

Other services, such as the Map Element Generator (MEG), the Map Element Distribution System (MEDS), and the Network Identity and Access Management Service, also exist, but these are outside the scope of this discussion.

The RSE is composed of the DSRC Radio subsystem, a routing function, and a set of proxy applications that extend the services residing at the SDN (described above) out to each RSE associated with that SDN. The proxies essentially pass messages to and from their counterpart SDN services and interface to the RSE radio subsystem. The radio subsystem includes a DSRC Radio, and a Radio Handler that accepts or sends messages from/to the various proxies. The radio handler also constructs or updates a play list that contains all broadcast messages to be transmitted.

Depending on the situation, an RSE may be connected to an LTP. This may be, for example, a local tolling system or a traffic signal controller. In operation, the LTP exchanges messages with OBEs and with Network Users through the RSE functions. These messages usually have local relevance (as in tolling or signals) and thus need to originate locally at the RSE.

The ENOC is used by system operators to control and manage the overall network and RSE suite.

The CA issues security credentials to elements of the system that require them. It also manages the overall security state of the system.

4.9 Roadside Equipment

The RSE is a self-contained unit installed at a given location along with the appropriate backhaul equipment. It acts as the gateway between the vehicle and the rest of the infrastructure. The RSE announces the services offered by the network and passes data between vehicles and network users.

The Radio portion of the RSE operates in the 5.9 GHz DSRC band using the IEEE 802.11p, IEEE 1609, and SAE J2735 communications stack, which defines the communications protocols supported by the DSRC Radio.

The backhaul connection from the RSE to the Michigan SDN is standard IPv4 and IPv6. The RSE is also equipped with GPS for self-positioning and providing Position Corrections to vehicles.

When deployed, security will enable the RSE to authenticate and validate vehicles, maintain a vehicle certificate revocation list and maintain its own certificates.

4.10 Service Delivery Node

The SDN houses the core service infrastructure of the VII system. The SDN contains server platforms, data stores, and software systems that support VII system data distribution and communications services. The SDN provides logical interfaces for RSE and network application connectivity. Traffic destined for or originating from RSEs will utilize multiple types of wired or wireless backhaul communications technologies interconnecting RSEs to their associated SDN. The SDN also provides connectivity for a number of public and private network user applications. These applications provide support for distribution of public probe data, generation of maps, dissemination of positioning correction information, advisory message delivery and dispatching, network management and Security Services. The VII network is made up of multiple Service Delivery Nodes (SDNs), each of which represents a logical entity for the set of interfaces and routing functions that provide connectivity and ingress/egress traffic flow to the VII network. VII network traffic is distributed and flows from one SDN to another SDN across a backbone network.

4.11 Certificate Authority

The CA structure for the VII system is shown in Figure 4-90.

This structure identifies five types of CA:

  1. Identified OBE Certifying Authority
  2. OBE Authorizing Authority (OAA)
  3. Anonymous OBE Certifying Authority (AOCA)
  4. Infrastructure CA
  5. Root CA

Block diagram with seven elements in three rows. At the top is an element labeled Root Certificate Authority. The second row has four elements labeled Infrastructure Certificate Authority, Identified OBE Certificate Authority, Anonymous OBE Certifying Authority, and OBE Authorizing Authority. The bottom row has an element labeled RSEs, Network Users, etc., and an element labeled VII Vehicle Segment including Identified OBE Applications and Anonymous OBE Applications. Root Certificate Authority at the top connects down to Infrastructure Certificate Authority and down to RSEs, Network Users, etc. The top element also connects down to Identified OBE Certificate Authority and down to Identified OBE Applications. The top element also connects down to Anonymous OBE Certifying Authority, and down to Anonymous OBE Applications. Anonymous OBE Certifying Authority has a two-way connection across to OBE Authorizing Authority. The element VII Vehicle Segment loops to the right and up to OBE Authorizing Authority.

Figure 4-90 CA Structure

These CAs are shown as separate logical entities for clarity. It is assumed that they might be combined and/or regionally distributed to optimize system performance. The roles and responsibilities of identified CAs (Infrastructure, Identified OBE and Root) are well defined in many security standards, and these will not be discussed here other than to point out that the Identified OBE CA will issue certificates that are used by the OBE to encrypt and authenticate transactions that rely on identification as the key element of legitimacy and assurance. These types of transactions typically include purchases and/or transactions where the two parties have established a trusted identified relationship (e.g. a service provider and a vehicle with an established account with that provider). It is also important to point out that the various lower level CAs need to tier to a single root authority so that certificates from users, RSEs, etc., can be verified by the vehicle security systems, and vice versa.

The VII system architecture uses two different types of security credentials: identified credentials, which are bound to a specific OBE and used for transactions that rely on identification, and anonymous credentials, which are drawn from a shared pool and used when the receiving parties are not trusted and assurance does not rely on identification.

Since anonymous keys are used by many different users, the question arises as to what is actually being certified when the CA issues credentials to a user. Since the security credentials are only used by the security function, it is possible to prevent their use (e.g., by encrypting them or locking them) unless the vehicle system is able to prove that it has not been in some way tampered with or changed. Using this approach, an anonymous signature certifies that the message was sent by a vehicle system that was able to pass the tampering/legitimacy test. This approach was developed conceptually during the POC program, but there was insufficient time to implement or test it; determining the required extent of the test remains a future task, although the mechanism for this process is part of the current VII system design.

Since there is always the threat of key compromise, the certificates associated with keys (identified or anonymous) are designed to have a finite lifetime. As certificates expire, the security functions in the vehicle will replace them through secure transactions with the CA.
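The finite-lifetime policy implies a periodic check of the sort sketched below; the renewal margin and all names are illustrative assumptions, not part of the POC design:

```python
def credentials_to_replace(certs, now, margin_s=0.0):
    """Return the IDs of certificates whose finite lifetime has
    elapsed (or will elapse within margin_s) and which the vehicle
    security function should replace through a secure transaction
    with the CA. `certs` maps certificate IDs to expiry times; all
    times are epoch seconds."""
    return [cid for cid, expiry in certs.items() if expiry <= now + margin_s]
```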

Of particular relevance to this document are the roles of the OBE Certifying and Authorizing Authorities. These entities are central to the means for managing anonymous certificates, which are used when the receiving parties are not trusted and when the assurance does not rely on identification.

To preserve anonymity, the VII system uses a shared pool of certificates and keys (credentials). Since these credentials need to be regularly updated (replaced as they expire or are revoked), the OBE must include a mechanism for requesting and managing credentials. However, since any credential provisioning transaction must be encrypted, it is difficult to prevent the entity providing these credentials from knowing the identity of the vehicle requesting them. The process and structure described in Figure 4-91 is intended to provide this desired anonymity.

Block diagram with seven elements in five rows. The top row has an element labeled Sealed Keys that connects to the left to an element labeled OBE Security Functions, and connects down to OBE Certification Manager. also, an element on the left labeled OBE Verification Function connects to the OBE Certification Manager, which has a two-way connection down to an element labeled Anonymous OBE Certifying Authority. This element also has a two-way connection down to an element labeled OBE Authorizing Authority. At the bottom, an element labeled Verification Code leads up to the element OBE Authorizing Authority.

Figure 4-91 Anonymous Certificate Management

The anonymous certification process operates as follows:

The OBE Certification Manager (CM) sends a request to the AOCA. This message includes an OBE Verification Code representing the current physical and software state of the OBE, encrypted with the OAA's public key. The message also includes a symmetric key encrypted with the Anonymous OBE Certifying Authority's public key. As a result, the AOCA is unable to determine any information about the OBE requesting the credentials.

The AOCA assigns a temporary ID to the request and passes it in its encrypted form to the OAA. The OAA decrypts the request and verifies that the OBE Verification Code is correct (i.e., that it is entitled to request these credentials, and that it has not been somehow tampered with), and sends an authorization to the AOCA.

The AOCA then randomly selects the credentials from the anonymous pool, encrypts them using the symmetric key provided by the OBE, and sends them to the requesting OBE.
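The separation of knowledge between the AOCA and the OAA can be sketched as follows. The "enc:" tags are stand-ins showing which party's public key protects each field; a real implementation would use actual public-key cryptography, and all names here are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class AnonymousCertRequest:
    """Request the OBE Certification Manager sends to the AOCA."""
    verification_blob: tuple  # OBE Verification Code, readable only by the OAA
    key_blob: tuple           # symmetric key, readable only by the AOCA

def build_request(verification_code, symmetric_key):
    # The verification code is sealed for the OAA and the symmetric
    # key for the AOCA; neither party can read the other's field.
    return AnonymousCertRequest(
        verification_blob=("enc:OAA", verification_code),
        key_blob=("enc:AOCA", symmetric_key),
    )

def aoca_forward(request, temp_id):
    """The AOCA forwards the still-encrypted verification blob to
    the OAA under a temporary ID, learning nothing about the OBE."""
    return {"temp_id": temp_id, "forwarded": request.verification_blob}
```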

Using this system, the OAA knows that a particular identified OBE requested credentials, and it has determined that that OBE is legitimate (via its OBE Verification Code). However, while the OAA knows the identity of the OBE, it does not know which certificates were supplied to that OBE. Similarly, while the AOCA knows which certificates were issued, it does not know to which OBE they were issued (since the temporary ID is not maintained). Since the OAA and AOCA are separate entities, there is no linkage between the certificates provided to the vehicle by the AOCA and the OBE identity.

As a result, the OBE, upon proving its legitimacy to the OAA, can obtain security credentials that cannot be traced to its identity, while assuring the recipient that the sender was legitimate.

4.12 Test Track

Formal system testing was carried out under controlled conditions at a test facility provided by Chrysler. The Chrysler facility is shown in Figure 4-92. This track was set up with two RSEs situated so that the footprints would not overlap. The facility was used to refine many of the applications under real world conditions without the risks and inconvenience of operating in live traffic on an open road. The facility was also used extensively to test the system operation with vehicles running at high speed past RSEs, and also to test the positioning system dynamics.

An aerial photograph shows a view of the track. Symbols indicate locations of test-related equipment, such as North RSE and South RSE. About two thirds of the track is located in an area with forest and small bodies of water.

Figure 4-92 Test Track Facility

4.13 Development Test Environment

Figure 4-93 provides an architectural overview of the entire DTE, including components residing in Herndon, VA and in the Michigan DTE. The DTE setup includes 55 RSEs: 11 along freeways and 44 along arterials, as shown in Figure 4-94. A typical RSE installation is shown in Figure 4-95.

The DTE also included a Michigan SDN and the Michigan Network Access Point (MINAP). The ENOC, located in Herndon, VA, monitors and manages all components.

DTE RSEs are connected to the MI SDN via one of three backhaul communications technologies: 3G cellular, wireline, or WiMAX.

The following services are provided by the Michigan DTE to vehicles and network users:

Network diagram with icons showing the grouping of laptops and servers on the right interconnected to the local network supporting the VII, and connecting further to components and vehicles on the left side of the diagram. Diagram provided by Booz Allen Hamilton

Figure 4-93 Overall VII Network System


Aerial photograph of the area where the test environment was established, with labels indicating locations of 3G, Wireline, and WiMAX on or along the test highway. Google Earth map with modifications by Booz Allen Hamilton

Figure 4-94 Demonstration Test Environment Map


A photograph shows an RSE unit mounted on a bracket extending from a light pole.

Figure 4-95 Typical RSE Installation
