      1.. SPDX-License-Identifier: GPL-2.0-or-later
      2
      3CTU CAN FD Driver
      4=================
      5
      6Author: Martin Jerabek <martin.jerabek01@gmail.com>
      7
      8
      9About CTU CAN FD IP Core
     10------------------------
     11
     12`CTU CAN FD <https://gitlab.fel.cvut.cz/canbus/ctucanfd_ip_core>`_
     13is an open source soft core written in VHDL.
     14It originated in 2015 as Ondrej Ille's project
     15at the `Department of Measurement <https://meas.fel.cvut.cz/>`_
     16of `FEE <http://www.fel.cvut.cz/en/>`_ at `CTU <https://www.cvut.cz/en>`_.
     17
     18The SocketCAN driver for Xilinx Zynq SoC based MicroZed board
     19`Vivado integration <https://gitlab.fel.cvut.cz/canbus/zynq/zynq-can-sja1000-top>`_
     20and Intel Cyclone V 5CSEMA4U23C6 based DE0-Nano-SoC Terasic board
     21`QSys integration <https://gitlab.fel.cvut.cz/canbus/intel-soc-ctucanfd>`_
      22has been developed, as well as support for
     23`PCIe integration <https://gitlab.fel.cvut.cz/canbus/pcie-ctucanfd>`_ of the core.
     24
     25In the case of Zynq, the core is connected via the APB system bus, which does
     26not have enumeration support, and the device must be specified in Device Tree.
      27Such a device is called a platform device in the kernel and is
      28handled by a platform device driver.
     29
     30The basic functional model of the CTU CAN FD peripheral has been
     31accepted into QEMU mainline. See QEMU `CAN emulation support <https://www.qemu.org/docs/master/system/devices/can.html>`_
     32for CAN FD buses, host connection and CTU CAN FD core emulation. The development
      33version of emulation support can be cloned from the ctu-canfd branch of the QEMU
      34local development `repository <https://gitlab.fel.cvut.cz/canbus/qemu-canbus>`_.
     35
     36
     37About SocketCAN
     38---------------
     39
     40SocketCAN is a standard common interface for CAN devices in the Linux
     41kernel. As the name suggests, the bus is accessed via sockets, similarly
      42to common network devices. The reasoning behind this is described in
      43depth in `Linux SocketCAN <https://www.kernel.org/doc/html/latest/networking/can.html>`_.
     44In short, it offers a
     45natural way to implement and work with higher layer protocols over CAN,
     46in the same way as, e.g., UDP/IP over Ethernet.
     47
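        As a brief illustration of this socket-based access, a minimal user-space
        sketch follows (error handling is omitted and the interface name ``can0``
        is only an example, not something the driver mandates):

        .. code:: c

            #include <string.h>
            #include <unistd.h>
            #include <net/if.h>
            #include <sys/ioctl.h>
            #include <sys/socket.h>
            #include <linux/can.h>
            #include <linux/can/raw.h>

            int main(void)
            {
                struct sockaddr_can addr = { .can_family = AF_CAN };
                struct can_frame frame = { .can_id = 0x123 };
                struct ifreq ifr;
                int s;

                frame.can_dlc = 2;
                frame.data[0] = 0xde;
                frame.data[1] = 0xad;

                /* CAN is just another protocol family to the socket layer */
                s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

                /* bind the socket to one particular CAN interface */
                strcpy(ifr.ifr_name, "can0");
                ioctl(s, SIOCGIFINDEX, &ifr);
                addr.can_ifindex = ifr.ifr_ifindex;
                bind(s, (struct sockaddr *)&addr, sizeof(addr));

                /* frames are then sent and received like ordinary socket data */
                write(s, &frame, sizeof(frame));
                close(s);
                return 0;
            }
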
     48Device probe
     49~~~~~~~~~~~~
     50
     51Before going into detail about the structure of a CAN bus device driver,
     52let's reiterate how the kernel gets to know about the device at all.
     53Some buses, like PCI or PCIe, support device enumeration. That is, when
     54the system boots, it discovers all the devices on the bus and reads
     55their configuration. The kernel identifies the device via its vendor ID
     56and device ID, and if there is a driver registered for this identifier
     57combination, its probe method is invoked to populate the driver's
      58instance for the given hardware. The situation is similar for USB, except
      59that USB additionally allows device hot-plugging.
     60
     61The situation is different for peripherals which are directly embedded
     62in the SoC and connected to an internal system bus (AXI, APB, Avalon,
     63and others). These buses do not support enumeration, and thus the kernel
     64has to learn about the devices from elsewhere. This is exactly what the
     65Device Tree was made for.
     66
     67Device tree
     68~~~~~~~~~~~
     69
      70An entry in the device tree states that a device exists in the system, how
      71it is reachable (on which bus it resides) and its configuration –
      72register addresses, interrupts and so on. An example of such a device
      73tree node is given below.
     74
     75::
     76
     77           / {
     78               /* ... */
     79               amba: amba {
     80                   #address-cells = <1>;
     81                   #size-cells = <1>;
     82                   compatible = "simple-bus";
     83
     84                   CTU_CAN_FD_0: CTU_CAN_FD@43c30000 {
     85                       compatible = "ctu,ctucanfd";
     86                       interrupt-parent = <&intc>;
     87                       interrupts = <0 30 4>;
     88                       clocks = <&clkc 15>;
     89                       reg = <0x43c30000 0x10000>;
     90                   };
     91               };
     92           };
     93
     94
     95.. _sec:socketcan:drv:
     96
     97Driver structure
     98~~~~~~~~~~~~~~~~
     99
    100The driver can be divided into two parts – platform-dependent device
    101discovery and set up, and platform-independent CAN network device
    102implementation.
    103
    104.. _sec:socketcan:platdev:
    105
    106Platform device driver
    107^^^^^^^^^^^^^^^^^^^^^^
    108
    109In the case of Zynq, the core is connected via the AXI system bus, which
    110does not have enumeration support, and the device must be specified in
     111Device Tree. Such a device is called a *platform device* in the
     112kernel and is handled by a *platform device driver*\  [1]_.
    113
    114A platform device driver provides the following things:
    115
    116-  A *probe* function
    117
    118-  A *remove* function
    119
    120-  A table of *compatible* devices that the driver can handle
    121
    122The *probe* function is called exactly once when the device appears (or
    123the driver is loaded, whichever happens later). If there are more
    124devices handled by the same driver, the *probe* function is called for
    125each one of them. Its role is to allocate and initialize resources
    126required for handling the device, as well as set up low-level functions
    127for the platform-independent layer, e.g., *read_reg* and *write_reg*.
    128After that, the driver registers the device to a higher layer, in our
    129case as a *network device*.
    130
    131The *remove* function is called when the device disappears, or the
    132driver is about to be unloaded. It serves to free the resources
    133allocated in *probe* and to unregister the device from higher layers.
    134
    135Finally, the table of *compatible* devices states which devices the
    136driver can handle. The Device Tree entry ``compatible`` is matched
    137against the tables of all *platform drivers*.
    138
    139.. code:: c
    140
    141           /* Match table for OF platform binding */
    142           static const struct of_device_id ctucan_of_match[] = {
    143               { .compatible = "ctu,canfd-2", },
    144               { .compatible = "ctu,ctucanfd", },
    145               { /* end of list */ },
    146           };
    147           MODULE_DEVICE_TABLE(of, ctucan_of_match);
    148
    149           static int ctucan_probe(struct platform_device *pdev);
    150           static int ctucan_remove(struct platform_device *pdev);
    151
    152           static struct platform_driver ctucanfd_driver = {
    153               .probe  = ctucan_probe,
    154               .remove = ctucan_remove,
    155               .driver = {
    156                   .name = DRIVER_NAME,
    157                   .of_match_table = ctucan_of_match,
    158               },
    159           };
    160           module_platform_driver(ctucanfd_driver);
    161
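        For illustration, a heavily simplified sketch of what such a *probe*
        function may do follows. The structure ``ctucan_priv``, the constant
        ``ctucan_bittiming_const`` and the field names are placeholders for this
        sketch, not necessarily the exact names used by the driver:

        .. code:: c

            static int ctucan_probe(struct platform_device *pdev)
            {
                struct net_device *ndev;
                struct ctucan_priv *priv;
                void __iomem *addr;
                int ret;

                /* map the register window described by the "reg" DT property */
                addr = devm_platform_ioremap_resource(pdev, 0);
                if (IS_ERR(addr))
                    return PTR_ERR(addr);

                /* allocate the CAN network device together with private data;
                 * the private structure embeds struct can_priv as its first member */
                ndev = alloc_candev(sizeof(struct ctucan_priv), 4 /* TX buffers */);
                if (!ndev)
                    return -ENOMEM;

                priv = netdev_priv(ndev);
                priv->mem_base = addr;
                priv->can.bittiming_const = &ctucan_bittiming_const;

                platform_set_drvdata(pdev, ndev);
                SET_NETDEV_DEV(ndev, &pdev->dev);

                /* hand the device over to the networking (SocketCAN) layer */
                ret = register_candev(ndev);
                if (ret)
                    free_candev(ndev);
                return ret;
            }
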
    162
    163.. _sec:socketcan:netdev:
    164
    165Network device driver
    166^^^^^^^^^^^^^^^^^^^^^
    167
    168Each network device must support at least these operations:
    169
    170-  Bring the device up: ``ndo_open``
    171
    172-  Bring the device down: ``ndo_close``
    173
    174-  Submit TX frames to the device: ``ndo_start_xmit``
    175
    176-  Signal TX completion and errors to the network subsystem: ISR
    177
    178-  Submit RX frames to the network subsystem: ISR and NAPI
    179
    180There are two possible event sources: the device and the network
    181subsystem. Device events are usually signaled via an interrupt, handled
    182in an Interrupt Service Routine (ISR). Handlers for the events
    183originating in the network subsystem are then specified in
    184``struct net_device_ops``.
    185
    186When the device is brought up, e.g., by calling ``ip link set can0 up``,
    187the driver’s function ``ndo_open`` is called. It should validate the
    188interface configuration and configure and enable the device. The
    189analogous opposite is ``ndo_close``, called when the device is being
    190brought down, be it explicitly or implicitly.
    191
    192When the system should transmit a frame, it does so by calling
    193``ndo_start_xmit``, which enqueues the frame into the device. If the
    194device HW queue (FIFO, mailboxes or whatever the implementation is)
    195becomes full, the ``ndo_start_xmit`` implementation informs the network
    196subsystem that it should stop the TX queue (via ``netif_stop_queue``).
    197It is then re-enabled later in ISR when the device has some space
    198available again and is able to enqueue another frame.
    199
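        A condensed sketch of this TX path is shown below; the helpers
        ``ctucan_hw_insert_frame()`` and ``ctucan_hw_txt_buf_available()`` are
        hypothetical stand-ins for the low-level driver calls:

        .. code:: c

            static netdev_tx_t ctucan_start_xmit(struct sk_buff *skb,
                                                 struct net_device *ndev)
            {
                struct ctucan_priv *priv = netdev_priv(ndev);
                struct canfd_frame *cf = (struct canfd_frame *)skb->data;

                if (can_dropped_invalid_skb(ndev, skb))
                    return NETDEV_TX_OK;

                /* enqueue the frame into a free HW TX buffer */
                ctucan_hw_insert_frame(priv, cf, skb);

                /* if no HW buffer is left, pause the SW TX queue; the ISR
                 * re-enables it once a buffer becomes free again */
                if (!ctucan_hw_txt_buf_available(priv))
                    netif_stop_queue(ndev);

                return NETDEV_TX_OK;
            }
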
     200All the device events are handled in the ISR, namely:
    201
    202#. **TX completion**. When the device successfully finishes transmitting
    203   a frame, the frame is echoed locally. On error, an informative error
    204   frame [2]_ is sent to the network subsystem instead. In both cases,
    205   the software TX queue is resumed so that more frames may be sent.
    206
    207#. **Error condition**. If something goes wrong (e.g., the device goes
    208   bus-off or RX overrun happens), error counters are updated, and
    209   informative error frames are enqueued to SW RX queue.
    210
     211#. **RX buffer not empty**. In this case, read the RX frames and enqueue
     212   them to the SW RX queue. Usually NAPI is used as a middle layer (see below).
    213
    214.. _sec:socketcan:napi:
    215
    216NAPI
    217~~~~
    218
    219The frequency of incoming frames can be high and the overhead to invoke
    220the interrupt service routine for each frame can cause significant
    221system load. There are multiple mechanisms in the Linux kernel to deal
    222with this situation. They evolved over the years of Linux kernel
    223development and enhancements. For network devices, the current standard
    224is NAPI – *the New API*. It is similar to classical top-half/bottom-half
    225interrupt handling in that it only acknowledges the interrupt in the ISR
    226and signals that the rest of the processing should be done in softirq
    227context. On top of that, it offers the possibility to *poll* for new
     228frames for a while. This can avoid the costly cycle of re-enabling
     229interrupts, handling an incoming IRQ in the ISR, scheduling the softirq
     230again and switching context back to it.
    231
    232More detailed documentation of NAPI may be found on the pages of Linux
    233Foundation `<https://wiki.linuxfoundation.org/networking/napi>`_.
    234
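        A minimal sketch of how a driver typically wires NAPI up is given below.
        The function names are illustrative, and the ``netif_napi_add()``
        signature has changed across kernel versions (the four-argument form
        shown here is the older one):

        .. code:: c

            /* ISR: acknowledge/mask the RX interrupt and defer the work to NAPI */
            static irqreturn_t ctucan_interrupt(int irq, void *dev_id)
            {
                struct net_device *ndev = dev_id;
                struct ctucan_priv *priv = netdev_priv(ndev);

                /* mask the RX interrupt in HW here, then schedule the poll */
                napi_schedule(&priv->napi);
                return IRQ_HANDLED;
            }

            /* NAPI poll: process up to 'budget' frames in softirq context */
            static int ctucan_rx_poll(struct napi_struct *napi, int budget)
            {
                int work_done = 0;

                /* ... read at most 'budget' frames from the RX FIFO here ... */

                if (work_done < budget) {
                    napi_complete_done(napi, work_done);
                    /* re-enable the RX interrupt in HW here */
                }
                return work_done;
            }

            /* registration, typically done once in probe */
            static void ctucan_napi_setup(struct net_device *ndev)
            {
                struct ctucan_priv *priv = netdev_priv(ndev);

                netif_napi_add(ndev, &priv->napi, ctucan_rx_poll, NAPI_POLL_WEIGHT);
            }
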
    235Integrating the core to Xilinx Zynq
    236-----------------------------------
    237
     238The core exposes a simple subset of the Avalon bus interface
     239(search for Intel **Avalon Interface Specifications**),
     240as it was originally used on
     241Altera FPGA chips, yet Xilinx natively interfaces with AXI
    242(search for ARM **AMBA AXI and ACE Protocol Specification AXI3,
    243AXI4, and AXI4-Lite, ACE and ACE-Lite**).
    244The most obvious solution would be to use
    245an Avalon/AXI bridge or implement some simple conversion entity.
    246However, the core’s interface is half-duplex with no handshake
    247signaling, whereas AXI is full duplex with two-way signaling. Moreover,
     248even an AXI-Lite slave interface is quite resource-intensive, and the
    249flexibility and speed of AXI are not required for a CAN core.
    250
    251Thus a much simpler bus was chosen – APB (Advanced Peripheral Bus)
    252(search for ARM **AMBA APB Protocol Specification**).
     253An APB-AXI bridge is directly available in
    254Xilinx Vivado, and the interface adaptor entity is just a few simple
    255combinatorial assignments.
    256
    257Finally, to be able to include the core in a block diagram as a custom
    258IP, the core, together with the APB interface, has been packaged as a
    259Vivado component.
    260
    261CTU CAN FD Driver design
    262------------------------
    263
    264The general structure of a CAN device driver has already been examined
     265above. The next paragraphs provide a more detailed description of the CTU
    266CAN FD core driver in particular.
    267
    268Low-level driver
    269~~~~~~~~~~~~~~~~
    270
    271The core is not intended to be used solely with SocketCAN, and thus it
    272is desirable to have an OS-independent low-level driver. This low-level
    273driver can then be used in implementations of OS driver or directly
    274either on bare metal or in a user-space application. Another advantage
    275is that if the hardware slightly changes, only the low-level driver
    276needs to be modified.
    277
    278The code [3]_ is in part automatically generated and in part written
     279manually by the core author, with contributions from the thesis’ author.
    280The low-level driver supports operations such as: set bit timing, set
    281controller mode, enable/disable, read RX frame, write TX frame, and so
    282on.
    283
    284Configuring bit timing
    285~~~~~~~~~~~~~~~~~~~~~~
    286
    287On CAN, each bit is divided into four segments: SYNC, PROP, PHASE1, and
    288PHASE2. Their duration is expressed in multiples of a Time Quantum
    289(details in `CAN Specification, Version 2.0 <http://esd.cs.ucr.edu/webres/can20.pdf>`_, chapter 8).
    290When configuring
    291bitrate, the durations of all the segments (and time quantum) must be
    292computed from the bitrate and Sample Point. This is performed
    293independently for both the Nominal bitrate and Data bitrate for CAN FD.
    294
    295SocketCAN is fairly flexible and offers either highly customized
    296configuration by setting all the segment durations manually, or a
    297convenient configuration by setting just the bitrate and sample point
    298(and even that is chosen automatically per Bosch recommendation if not
    299specified). However, each CAN controller may have different base clock
    300frequency and different width of segment duration registers. The
    301algorithm thus needs the minimum and maximum values for the durations
    302(and clock prescaler) and tries to optimize the numbers to fit both the
    303constraints and the requested parameters.
    304
    305.. code:: c
    306
    307           struct can_bittiming_const {
    308               char name[16];      /* Name of the CAN controller hardware */
    309               __u32 tseg1_min;    /* Time segment 1 = prop_seg + phase_seg1 */
    310               __u32 tseg1_max;
    311               __u32 tseg2_min;    /* Time segment 2 = phase_seg2 */
    312               __u32 tseg2_max;
    313               __u32 sjw_max;      /* Synchronisation jump width */
    314               __u32 brp_min;      /* Bit-rate prescaler */
    315               __u32 brp_max;
    316               __u32 brp_inc;
    317           };
    318
    319
    320[lst:can_bittiming_const]
    321
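        For illustration, a driver fills in one such structure with its hardware
        limits and points the CAN device layer at it. The numbers below are
        placeholders only, not the authoritative CTU CAN FD limits (consult the
        driver sources for those):

        .. code:: c

            static const struct can_bittiming_const ctucan_nominal_bittiming_const = {
                .name      = "ctucanfd",
                .tseg1_min = 2,       /* placeholder limits only */
                .tseg1_max = 190,
                .tseg2_min = 1,
                .tseg2_max = 63,
                .sjw_max   = 31,
                .brp_min   = 1,
                .brp_max   = 8,
                .brp_inc   = 1,
            };

            /* hooked up in probe(), e.g.:
             *   priv->can.bittiming_const = &ctucan_nominal_bittiming_const;
             * so that the CAN core computes timings within these constraints */
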
    322A curious reader will notice that the durations of the segments PROP_SEG
    323and PHASE_SEG1 are not determined separately but rather combined and
    324then, by default, the resulting TSEG1 is evenly divided between PROP_SEG
    325and PHASE_SEG1. In practice, this has virtually no consequences as the
    326sample point is between PHASE_SEG1 and PHASE_SEG2. In CTU CAN FD,
    327however, the duration registers ``PROP`` and ``PH1`` have different
    328widths (6 and 7 bits, respectively), so the auto-computed values might
    329overflow the shorter register and must thus be redistributed among the
    330two [4]_.
    331
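        A sketch of this redistribution, assuming the 6-bit ``PROP`` limit
        mentioned above (the helper name is illustrative; the real logic lives in
        the low-level driver functions named in the footnote):

        .. code:: c

            /* TSEG1 = prop_seg + phase_seg1 was computed by the CAN core; if
             * prop_seg does not fit into the 6-bit PROP field, move the excess
             * into PH1.  The sample point at the end of TSEG1 is unaffected. */
            static void ctucan_redistribute_tseg1(struct can_bittiming *bt)
            {
                const u32 prop_max = 63;    /* 6-bit PROP register */

                if (bt->prop_seg > prop_max) {
                    bt->phase_seg1 += bt->prop_seg - prop_max;
                    bt->prop_seg = prop_max;
                }
            }
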
    332Handling RX
    333~~~~~~~~~~~
    334
     335Frame reception is handled in the NAPI queue, which is scheduled from the ISR
     336when the RXNE (RX FIFO Not Empty) bit is set. Frames are read one by one
     337until either no frame is left in the RX FIFO or the maximum work quota
     338has been reached for the NAPI poll run (see the NAPI section above). Each
     339frame is then passed to the network interface RX queue.
    340
    341An incoming frame may be either a CAN 2.0 frame or a CAN FD frame. The
    342way to distinguish between these two in the kernel is to allocate either
    343``struct can_frame`` or ``struct canfd_frame``, the two having different
    344sizes. In the controller, the information about the frame type is stored
     345in the first word of the RX FIFO.
    346
     347This presents a chicken-and-egg problem: we want to allocate the ``skb``
     348for the frame, and only if it succeeds, fetch the frame from the FIFO;
     349otherwise keep it there for later. But to be able to allocate the
     350correct ``skb``, we have to fetch the first word of the FIFO. There are
    351several possible solutions:
    352
    353#. Read the word, then allocate. If it fails, discard the rest of the
    354   frame. When the system is low on memory, the situation is bad anyway.
    355
    356#. Always allocate ``skb`` big enough for an FD frame beforehand. Then
    357   tweak the ``skb`` internals to look like it has been allocated for
    358   the smaller CAN 2.0 frame.
    359
     360#. Add an option to peek into the FIFO instead of consuming the word.
    361
    362#. If the allocation fails, store the read word into driver’s data. On
    363   the next try, use the stored word instead of reading it again.
    364
    365Option 1 is simple enough, but not very satisfying if we could do
    366better. Option 2 is not acceptable, as it would require modifying the
    367private state of an integral kernel structure. The slightly higher
    368memory consumption is just a virtual cherry on top of the “cake”. Option
    3693 requires non-trivial HW changes and is not ideal from the HW point of
    370view.
    371
    372Option 4 seems like a good compromise, with its disadvantage being that
    373a partial frame may stay in the FIFO for a prolonged time. Nonetheless,
    374there may be just one owner of the RX FIFO, and thus no one else should
    375see the partial frame (disregarding some exotic debugging scenarios).
     376Besides, the driver resets the core on its initialization, so the
    377partial frame cannot be “adopted” either. In the end, option 4 was
    378selected [5]_.
    379
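        A sketch of the selected approach is shown below. The field and helper
        names (``rxfrm_first_word``, ``ctucan_hw_read_rx_word()``, ``FFW_FDF``)
        are illustrative, not the driver's exact identifiers:

        .. code:: c

            /* Allocate an skb of the right type for the next frame in the RX FIFO.
             * If the allocation fails, remember the already-consumed first word in
             * the driver data and retry on the next NAPI poll (option 4). */
            static struct sk_buff *ctucan_alloc_rx_skb(struct ctucan_priv *priv,
                                                       struct net_device *ndev,
                                                       struct canfd_frame **cfd)
            {
                struct sk_buff *skb;
                u32 ffw;

                if (priv->rxfrm_first_word_valid)
                    ffw = priv->rxfrm_first_word;          /* saved from a failed try */
                else
                    ffw = ctucan_hw_read_rx_word(priv);    /* consumes the FIFO word */

                if (ffw & FFW_FDF)                         /* CAN FD frame */
                    skb = alloc_canfd_skb(ndev, cfd);
                else                                       /* classic CAN 2.0 frame */
                    skb = alloc_can_skb(ndev, (struct can_frame **)cfd);

                if (!skb) {
                    priv->rxfrm_first_word = ffw;
                    priv->rxfrm_first_word_valid = true;
                    return NULL;    /* leave the partial frame in the FIFO for now */
                }

                priv->rxfrm_first_word_valid = false;
                return skb;
            }
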
    380.. _subsec:ctucanfd:rxtimestamp:
    381
    382Timestamping RX frames
    383^^^^^^^^^^^^^^^^^^^^^^
    384
    385The CTU CAN FD core reports the exact timestamp when the frame has been
    386received. The timestamp is by default captured at the sample point of
    387the last bit of EOF but is configurable to be captured at the SOF bit.
    388The timestamp source is external to the core and may be up to 64 bits
    389wide. At the time of writing, passing the timestamp from kernel to
    390userspace is not yet implemented, but is planned in the future.
    391
    392Handling TX
    393~~~~~~~~~~~
    394
    395The CTU CAN FD core has 4 independent TX buffers, each with its own
    396state and priority. When the core wants to transmit, a TX buffer in
    397Ready state with the highest priority is selected.
    398
     399The priorities are 3-bit numbers in the TX_PRIORITY register
    400(nibble-aligned). This should be flexible enough for most use cases.
    401SocketCAN, however, supports only one FIFO queue for outgoing
    402frames [6]_. The buffer priorities may be used to simulate the FIFO
    403behavior by assigning each buffer a distinct priority and *rotating* the
    404priorities after a frame transmission is completed.
    405
    406In addition to priority rotation, the SW must maintain head and tail
    407pointers into the FIFO formed by the TX buffers to be able to determine
     408which buffer should be used for the next frame (``txb_head``) and which
    409should be the first completed one (``txb_tail``). The actual buffer
    410indices are (obviously) modulo 4 (number of TX buffers), but the
    411pointers must be at least one bit wider to be able to distinguish
    412between FIFO full and FIFO empty – in this situation,
    413:math:`txb\_head \equiv txb\_tail\ (\textrm{mod}\ 4)`. An example of how
     414the FIFO is maintained, together with priority rotation, is depicted in the tables below.
    415
    416|
    417
    418+------+---+---+---+---+
    419| TXB# | 0 | 1 | 2 | 3 |
    420+======+===+===+===+===+
    421| Seq  | A | B | C |   |
    422+------+---+---+---+---+
    423| Prio | 7 | 6 | 5 | 4 |
    424+------+---+---+---+---+
    425|      |   | T |   | H |
    426+------+---+---+---+---+
    427
    428|
    429
    430+------+---+---+---+---+
    431| TXB# | 0 | 1 | 2 | 3 |
    432+======+===+===+===+===+
    433| Seq  |   | B | C |   |
    434+------+---+---+---+---+
    435| Prio | 4 | 7 | 6 | 5 |
    436+------+---+---+---+---+
    437|      |   | T |   | H |
    438+------+---+---+---+---+
    439
    440|
    441
    442+------+---+---+---+---+----+
    443| TXB# | 0 | 1 | 2 | 3 | 0’ |
    444+======+===+===+===+===+====+
    445| Seq  | E | B | C | D |    |
    446+------+---+---+---+---+----+
    447| Prio | 4 | 7 | 6 | 5 |    |
    448+------+---+---+---+---+----+
    449|      |   | T |   |   | H  |
    450+------+---+---+---+---+----+
    451
    452|
    453
    454.. kernel-figure:: fsm_txt_buffer_user.svg
    455
    456   TX Buffer states with possible transitions
    457
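        Returning to the head and tail pointers described above, a minimal sketch
        of the bookkeeping follows (four TX buffers assumed; the field and helper
        names are illustrative):

        .. code:: c

            #define CTUCAN_NTXBUFS 4

            /* txb_head and txb_tail are free-running counters; the buffer index is
             * the counter modulo the number of buffers, and the extra bits make a
             * full FIFO distinguishable from an empty one. */
            static bool ctucan_txfifo_full(const struct ctucan_priv *priv)
            {
                return priv->txb_head - priv->txb_tail >= CTUCAN_NTXBUFS;
            }

            static u32 ctucan_next_txb(const struct ctucan_priv *priv)
            {
                return priv->txb_head % CTUCAN_NTXBUFS;    /* buffer to fill next */
            }

            /* called from the ISR when the oldest buffer finished transmitting */
            static void ctucan_txb_done(struct ctucan_priv *priv)
            {
                priv->txb_tail++;
                ctucan_hw_rotate_txb_prio(priv);    /* rotate TX_PRIORITY, illustrative */
            }
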
    458.. _subsec:ctucanfd:txtimestamp:
    459
    460Timestamping TX frames
    461^^^^^^^^^^^^^^^^^^^^^^
    462
    463When submitting a frame to a TX buffer, one may specify the timestamp at
    464which the frame should be transmitted. The frame transmission may start
    465later, but not sooner. Note that the timestamp does not participate in
    466buffer prioritization – that is decided solely by the mechanism
    467described above.
    468
    469Support for time-based packet transmission was recently merged to Linux
    470v4.19 `Time-based packet transmission <https://lwn.net/Articles/748879/>`_,
    471but it remains yet to be researched
    472whether this functionality will be practical for CAN.
    473
     474Similarly to RX frames, the core also
     475supports retrieving the timestamp of TX frames – that is, the time when
     476the frame was successfully delivered. The particulars are very similar
     477to timestamping RX frames and are described in the section above.
    478
    479Handling RX buffer overrun
    480~~~~~~~~~~~~~~~~~~~~~~~~~~
    481
     482When a received frame no longer fits into the hardware RX FIFO in its
     483entirety, the RX FIFO overrun flag (STATUS[DOR]) is set and the Data Overrun
    484Interrupt (DOI) is triggered. When servicing the interrupt, care must be
    485taken first to clear the DOR flag (via COMMAND[CDO]) and after that
    486clear the DOI interrupt flag. Otherwise, the interrupt would be
    487immediately [7]_ rearmed.
    488
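        In the interrupt handler this ordering might look as follows;
        ``ctucan_write32()`` and the register/bit names are illustrative
        stand-ins for the driver's real accessors and definitions:

        .. code:: c

            static void ctucan_handle_data_overrun(struct ctucan_priv *priv)
            {
                /* 1) clear the Data Overrun status flag via COMMAND[CDO] ... */
                ctucan_write32(priv, CTUCANFD_COMMAND, CMD_CDO);

                /* 2) ... and only then acknowledge the DOI interrupt flag; in the
                 * opposite order the still-set DOR flag would re-arm DOI at once */
                ctucan_write32(priv, CTUCANFD_INT_STAT, INT_DOI);
            }
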
    489**Note**: During development, it was discussed whether the internal HW
    490pipelining cannot disrupt this clear sequence and whether an additional
    491dummy cycle is necessary between clearing the flag and the interrupt. On
     492the Avalon interface, it indeed proved to be the case, but APB is safe
     493because it uses 2-cycle transactions. Essentially, the DOR flag
     494would be cleared, but the DOI register’s Preset input would still be high
     495in the cycle when the DOI clear request would also be applied (by setting
    496the register’s Reset input high). As Set had higher priority than Reset,
    497the DOI flag would not be reset. This has been already fixed by swapping
    498the Set/Reset priority (see issue #187).
    499
    500Reporting Error Passive and Bus Off conditions
    501~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    502
    503It may be desirable to report when the node reaches *Error Passive*,
    504*Error Warning*, and *Bus Off* conditions. The driver is notified about
    505error state change by an interrupt (EPI, EWLI), and then proceeds to
    506determine the core’s error state by reading its error counters.
    507
    508There is, however, a slight race condition here – there is a delay
    509between the time when the state transition occurs (and the interrupt is
    510triggered) and when the error counters are read. When EPI is received,
    511the node may be either *Error Passive* or *Bus Off*. If the node goes
    512*Bus Off*, it obviously remains in the state until it is reset.
    513Otherwise, the node is *or was* *Error Passive*. However, it may happen
    514that the read state is *Error Warning* or even *Error Active*. It may be
    515unclear whether and what exactly to report in that case, but I
    516personally entertain the idea that the past error condition should still
    517be reported. Similarly, when EWLI is received but the state is later
    518detected to be *Error Passive*, *Error Passive* should be reported.
    519
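        A sketch of how the state derived from the error counters can be
        propagated to the CAN device layer with the standard ``can_change_state()``
        helper; the counter-reading helper is illustrative:

        .. code:: c

            static void ctucan_report_error_state(struct net_device *ndev)
            {
                struct ctucan_priv *priv = netdev_priv(ndev);
                struct can_berr_counter bec;
                enum can_state tx_state, rx_state;
                struct can_frame *cf;
                struct sk_buff *skb;

                ctucan_hw_read_err_counters(priv, &bec);    /* illustrative helper */

                /* thresholds 96/128 follow the CAN specification */
                tx_state = bec.txerr >= 128 ? CAN_STATE_ERROR_PASSIVE :
                           bec.txerr >= 96  ? CAN_STATE_ERROR_WARNING :
                                              CAN_STATE_ERROR_ACTIVE;
                rx_state = bec.rxerr >= 128 ? CAN_STATE_ERROR_PASSIVE :
                           bec.rxerr >= 96  ? CAN_STATE_ERROR_WARNING :
                                              CAN_STATE_ERROR_ACTIVE;

                /* enqueue an informative error frame and update the device state */
                skb = alloc_can_err_skb(ndev, &cf);
                can_change_state(ndev, skb ? cf : NULL, tx_state, rx_state);
                if (skb)
                    netif_rx(skb);
            }
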
    520
    521CTU CAN FD Driver Sources Reference
    522-----------------------------------
    523
    524.. kernel-doc:: drivers/net/can/ctucanfd/ctucanfd.h
    525   :internal:
    526
    527.. kernel-doc:: drivers/net/can/ctucanfd/ctucanfd_base.c
    528   :internal:
    529
    530.. kernel-doc:: drivers/net/can/ctucanfd/ctucanfd_pci.c
    531   :internal:
    532
    533.. kernel-doc:: drivers/net/can/ctucanfd/ctucanfd_platform.c
    534   :internal:
    535
    536CTU CAN FD IP Core and Driver Development Acknowledgment
    537---------------------------------------------------------
    538
     539* Ondrej Ille <ondrej.ille@gmail.com>
    540
     541  * started the project as a student at the Department of Measurement, FEE, CTU
     542  * invested a great amount of personal time and enthusiasm in the project over the years
     543  * worked on further funded tasks
    544
    545* `Department of Measurement <https://meas.fel.cvut.cz/>`_,
    546  `Faculty of Electrical Engineering <http://www.fel.cvut.cz/en/>`_,
    547  `Czech Technical University <https://www.cvut.cz/en>`_
    548
     549  * has been the main investor in the project over many years
     550  * uses the project in its CAN/CAN FD diagnostics framework for `Skoda Auto <https://www.skoda-auto.cz/>`_
    551
    552* `Digiteq Automotive <https://www.digiteqautomotive.com/en>`_
    553
    554  * funding of the project CAN FD Open Cores Support Linux Kernel Based Systems
    555  * negotiated and paid CTU to allow public access to the project
    556  * provided additional funding of the work
    557
    558* `Department of Control Engineering <https://control.fel.cvut.cz/en>`_,
    559  `Faculty of Electrical Engineering <http://www.fel.cvut.cz/en/>`_,
    560  `Czech Technical University <https://www.cvut.cz/en>`_
    561
    562  * solving the project CAN FD Open Cores Support Linux Kernel Based Systems
    563  * providing GitLab management
    564  * virtual servers and computational power for continuous integration
    565  * providing hardware for HIL continuous integration tests
    566
    567* `PiKRON Ltd. <http://pikron.com/>`_
    568
    569  * minor funding to initiate preparation of the project open-sourcing
    570
    571* Petr Porazil <porazil@pikron.com>
    572
    573  * design of PCIe transceiver addon board and assembly of boards
    574  * design and assembly of MZ_APO baseboard for MicroZed/Zynq based system
    575
    576* Martin Jerabek <martin.jerabek01@gmail.com>
    577
    578  * Linux driver development
    579  * continuous integration platform architect and GHDL updates
     580  * thesis `Open-source and Open-hardware CAN FD Protocol Support <https://dspace.cvut.cz/bitstream/handle/10467/80366/F3-DP-2019-Jerabek-Martin-Jerabek-thesis-2019-canfd.pdf>`_
    581
    582* Jiri Novak <jnovak@fel.cvut.cz>
    583
    584  * project initiation, management and use at Department of Measurement, FEE, CTU
    585
    586* Pavel Pisa <pisa@cmp.felk.cvut.cz>
    587
    588  * initiate open-sourcing, project coordination, management at Department of Control Engineering, FEE, CTU
    589
     590* Jaroslav Beran <jara.beran@gmail.com>
    591
     592  * system integration for Intel SoC, core and driver testing and updates
    593
    594* Carsten Emde (`OSADL <https://www.osadl.org/>`_)
    595
     596  * provided OSADL expertise to discuss IP core licensing
     597  * pointed out a possible deadlock between the LGPL and a potential CAN bus patent case, which led to relicensing the IP core design under a BSD-like license
    598
    599* Reiner Zitzmann and Holger Zeltwanger (`CAN in Automation <https://www.can-cia.org/>`_)
    600
     601  * provided suggestions and help to inform the community about the project and invited us to events focused on future CAN bus development directions
    602
    603* Jan Charvat
    604
     605  * implemented the CTU CAN FD functional model for QEMU, which has been integrated into QEMU mainline (`docs/system/devices/can.rst <https://www.qemu.org/docs/master/system/devices/can.html>`_)
     606  * Bachelor thesis Model of CAN FD Communication Controller for QEMU Emulator
    607
    608Notes
    609-----
    610
    611
    612.. [1]
    613   Other buses have their own specific driver interface to set up the
    614   device.
    615
    616.. [2]
     617   Not to be confused with a CAN Error Frame. This is a ``can_frame`` with
    618   ``CAN_ERR_FLAG`` set and some error info in its ``data`` field.
    619
    620.. [3]
    621   Available in CTU CAN FD repository
    622   `<https://gitlab.fel.cvut.cz/canbus/ctucanfd_ip_core>`_
    623
    624.. [4]
    625   As is done in the low-level driver functions
    626   ``ctucan_hw_set_nom_bittiming`` and
    627   ``ctucan_hw_set_data_bittiming``.
    628
    629.. [5]
    630   At the time of writing this thesis, option 1 is still being used and
     631   the modification is queued in GitLab issue #222.
    632
    633.. [6]
    634   Strictly speaking, multiple CAN TX queues are supported since v4.19
    635   `can: enable multi-queue for SocketCAN devices <https://lore.kernel.org/patchwork/patch/913526/>`_ but no mainline driver is using
    636   them yet.
    637
    638.. [7]
     639   Or rather in the next clock cycle.