Drivers for PCIe-Powered AI Infrastructure


PCIe devices, including AI accelerators, contribute significantly to the development and adoption of AI technologies. The PCIe interface provides the high-speed bandwidth needed for efficient data transfers, and when combined with AI computing power, the potential for innovation is vast. In platforms like these, the design must address bottlenecks and resource-intensive processes.

Enhancing AI Infrastructure and Application Drivers

Developers must take a holistic approach to system optimization. Ensuring seamless integration across hardware, software, and infrastructure is key to delivering high-performance AI at the edge. This includes:

  • System Optimization: Optimize hardware, software, and infrastructure to avoid bottlenecks and manage large data volumes efficiently.
  • Appropriate Hardware: Choose accelerators and components that align with specific workload demands for optimal performance.
  • Robust Data Pipelines: Design data workflows that can handle continuous and large-scale data processing.
  • Ensuring Driver Compatibility: Use optimized drivers to facilitate smooth integration and ensure systems can meet the demands of AI workloads and peripherals.

The driver plays a crucial role in the system architecture. A well-designed driver offloads work from the CPU and minimizes points of interaction between components. In critical operations such as real-time AI performance analysis or other time-sensitive tasks, an efficient driver can be the determining factor between success and failure.

Here are a few key steps to enhance your driver’s impact on AI performance:

Implementing Direct Memory Access (DMA): Our first step is to ensure that data flows through as few channels as possible. For example, using Direct Memory Access (DMA) allows data to transfer with little to no involvement from the CPU. This reduces the CPU’s workload and significantly improves transfer speed.
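As a minimal illustration of this idea, the following sketch uses the standard Linux kernel DMA API (not WinDriver's API) to allocate a DMA-capable buffer and program its bus address into a device. The register offsets and the my_dev_start_dma helper are hypothetical placeholders, and the sketch assumes a 64-bit platform:

```c
/* Illustrative sketch only: generic Linux kernel DMA API, not WinDriver calls.
 * MY_DEV_DMA_ADDR/LEN/START are hypothetical register offsets for an
 * imaginary device; a real driver would use its hardware's register map. */
#include <linux/dma-mapping.h>
#include <linux/io.h>

#define MY_DEV_DMA_ADDR  0x00  /* hypothetical: buffer bus-address register */
#define MY_DEV_DMA_LEN   0x08  /* hypothetical: transfer-length register    */
#define MY_DEV_DMA_START 0x10  /* hypothetical: doorbell register           */

static int my_dev_start_dma(struct device *dev, void __iomem *regs, size_t len)
{
    dma_addr_t dma_handle;
    void *buf;

    /* Allocate a coherent buffer the device can read and write directly,
     * so the CPU does not have to move the payload itself. */
    buf = dma_alloc_coherent(dev, len, &dma_handle, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    /* Program the bus address and length, then start the transfer
     * (writeq assumes a 64-bit platform). */
    writeq(dma_handle, regs + MY_DEV_DMA_ADDR);
    writeq(len,        regs + MY_DEV_DMA_LEN);
    writel(1,          regs + MY_DEV_DMA_START);

    /* ... wait for completion via polling or an interrupt, then clean up. */
    dma_free_coherent(dev, len, buf, dma_handle);
    return 0;
}
```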

Using DMA with the Interrupt Method: With interrupt-driven Direct Memory Access, the CPU is only notified when the data transfer is complete or when an issue occurs. The DMA controller manages data transfer between the peripheral and memory, while the CPU remains free to perform other tasks. Once the transfer is finished, the DMA controller interrupts the CPU to handle any post-transfer operations. This approach minimizes CPU involvement during the transfer, improving overall system efficiency while ensuring timely updates when the process concludes.
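The sketch below shows one common way this pattern looks in a Linux kernel driver, using a completion object that the interrupt service routine signals when the device reports that the DMA has finished. The IRQ number, device context, and interrupt-acknowledge step are assumptions, not WinDriver code:

```c
/* Hedged sketch of interrupt-driven DMA completion (generic Linux kernel API).
 * struct my_dev_ctx and the IRQ wiring are illustrative assumptions. */
#include <linux/interrupt.h>
#include <linux/completion.h>

struct my_dev_ctx {
    struct completion dma_done;   /* signaled by the ISR when the DMA finishes */
    void __iomem *regs;
};

static irqreturn_t my_dev_isr(int irq, void *data)
{
    struct my_dev_ctx *ctx = data;

    /* Device-specific: read/clear the interrupt status register here, then
     * wake the task waiting for the transfer. */
    complete(&ctx->dma_done);
    return IRQ_HANDLED;
}

static int my_dev_dma_wait(struct my_dev_ctx *ctx, int irq)
{
    int ret;

    init_completion(&ctx->dma_done);
    ret = request_irq(irq, my_dev_isr, 0, "my_dev_dma", ctx);
    if (ret)
        return ret;

    /* ... program and start the DMA transfer here ... */

    /* The CPU is free for other work until the completion interrupt fires. */
    wait_for_completion(&ctx->dma_done);
    free_irq(irq, ctx);
    return 0;
}
```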

Creating a Shared DMA Buffer: Using a shared buffer speeds up data movement and reduces latency, enabling seamless, low-latency access for critical components and peripherals across multiple processes. In complex systems where scalability and adaptability are paramount, expediting data transfer through shared DMA buffers is critical for overall system performance.
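One way such a shared buffer can be exposed is through a driver mmap handler, so user-space processes see the same pages the device writes into. The sketch below uses the Linux dma_mmap_coherent() helper; the context structure and file-operations wiring are assumptions for illustration:

```c
/* Sketch: mapping a coherent DMA buffer into user space for zero-copy access.
 * struct shared_buf_ctx and the surrounding file_operations are assumptions. */
#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

struct shared_buf_ctx {
    struct device *dev;
    void *cpu_addr;          /* kernel virtual address of the DMA buffer */
    dma_addr_t dma_handle;   /* bus address programmed into the device   */
    size_t buf_size;
};

static int shared_buf_mmap(struct file *filp, struct vm_area_struct *vma)
{
    struct shared_buf_ctx *ctx = filp->private_data;
    size_t len = vma->vm_end - vma->vm_start;

    if (len > ctx->buf_size)
        return -EINVAL;

    /* Map the same pages the device DMAs into directly into the calling
     * process, avoiding an extra copy between kernel and user space. */
    return dma_mmap_coherent(ctx->dev, vma, ctx->cpu_addr,
                             ctx->dma_handle, len);
}
```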

Simplifying the Driver Development Process

Driver development can be just as complex as building AI infrastructure. It involves integrating hardware, software, and memory management for seamless operation. WinDriver simplifies this process by offering a comprehensive toolkit that includes everything you need to create optimized drivers without dealing with the complexities of low-level system operations, streamlining and accelerating the development process.

Key Features of WinDriver

  • DMA Support: WinDriver includes a robust DMA mechanism that supports both Scatter-Gather and Contiguous DMA operations. It ensures efficient memory management and optimal system performance by allocating, locking, and releasing memory automatically.
  • Interrupt Handler Functions: WinDriver provides powerful interrupt handler functions, making it easier to manage various interrupt types. It includes sample implementations that handle listening for interrupts, acknowledgment, callback/transfer commands, and efficiently closing the interrupt handling process.
  • DMA Shared Buffer: WinDriver supports the sharing of buffers, including contiguous DMA and kernel buffers between multiple processes. This eliminates unnecessary buffer copying, improves data transfer efficiency, and makes inter-process communication (IPC) more reliable.

If you’d like to preview any of the above features, please feel free to register for a free trial period.

IOMMU vs. DMA

Understanding the Data Transfer Powerhouse Duo

In modern computing environments, efficient data transfer between hardware devices and system memory is crucial for optimal performance. Two key technologies utilized to facilitate this data movement are the Input-Output Memory Management Unit (IOMMU) and Direct Memory Access (DMA). While both mechanisms serve the same fundamental purpose of enabling efficient data transfers, they operate differently and cater to distinct use cases. In this article, we delve into the technical nuances that differentiate IOMMU and direct DMA.

Understanding Direct Memory Access (DMA)

Direct Memory Access (DMA) is a mechanism that allows hardware peripherals to transfer data directly to and from the system’s memory without involving the CPU. DMA is commonly employed in scenarios where frequent data transfers between devices and memory are required, such as disk I/O operations or network communication.

In a typical DMA operation, the peripheral device initiates the data transfer by sending a DMA request to the DMA controller. The DMA controller, in turn, coordinates the transfer by temporarily taking control of the system bus and accessing the memory directly. This bypasses the CPU, thereby improving overall system performance by reducing CPU overhead.

The Role of IOMMU

On the other hand, the Input-Output Memory Management Unit (IOMMU) is a hardware component responsible for managing memory access for I/O devices. It acts as a bridge between the addresses used by hardware devices and the physical addresses of system memory, much as the MMU does for the CPU’s virtual addresses.

One of the primary functions of the IOMMU is to provide memory protection and isolation for I/O devices. By mapping device-visible I/O addresses to system memory, the IOMMU ensures that each device can only access its allocated memory regions, preventing unauthorized access and enhancing system security.

Additionally, the IOMMU enables address translation and remapping, allowing devices to reach memory beyond their native addressing limits. This feature is particularly useful in systems with large amounts of memory or when utilizing virtualization technologies, as it allows for efficient memory management and resource allocation.
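In Linux, this translation is largely transparent to the driver: the same DMA-mapping call is used whether or not an IOMMU is present, and the returned handle is an I/O virtual address (IOVA) when the IOMMU is active. The short sketch below illustrates the idea; the function and its error-handling convention are simplified assumptions:

```c
/* Sketch: the same dma_map_single() call works with or without an IOMMU.
 * With the IOMMU enabled, *handle is an I/O virtual address (IOVA) that the
 * IOMMU translates and access-checks on every device access; without it,
 * the handle is typically the raw physical address. */
#include <linux/dma-mapping.h>

static int map_for_device(struct device *dev, void *buf, size_t len,
                          dma_addr_t *handle)
{
    *handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, *handle))
        return -ENOMEM;

    /* The device may only touch [*handle, *handle + len); with an IOMMU,
     * any access outside this window is blocked, isolating the device. */
    return 0;
}
```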

Key Differences and Use Cases

The main difference between IOMMU and direct DMA lies in their approach to memory access and management:

Address Translation: In direct DMA, the device accesses memory using physical addresses directly, without translation. In contrast, the IOMMU performs address translation, mapping device-visible I/O virtual addresses to physical memory addresses, thereby providing memory protection and isolation.

Memory Management: Direct DMA transfers data between devices and memory without CPU intervention, optimizing performance. Conversely, the IOMMU adds an additional layer of complexity by managing memory mappings and access permissions, which may introduce some overhead.

Use Cases: Direct DMA is well-suited for scenarios requiring high-performance data transfers, such as disk I/O or network communication, where minimizing CPU overhead is critical. On the other hand, the IOMMU is essential for ensuring memory protection, isolation, and address translation in systems with multiple I/O devices or in virtualized environments.

While both IOMMU and direct DMA serve the common goal of facilitating efficient data transfers between hardware devices and system memory, they operate using different mechanisms and cater to distinct use cases. Direct DMA prioritizes performance by allowing devices to access memory directly without CPU intervention, whereas the IOMMU adds a layer of memory management and protection, essential for ensuring system security and resource isolation. 

In our research, we found that performing DMA through the IOMMU dramatically enhances the transfer rate. To use the IOMMU for DMA, enable it in your system’s BIOS/UEFI; the operating system will then manage the DMA mappings for the selected hardware.

Optimizing Data Transfer

Exploring Interrupt and Polling Mechanisms in DMA

In computer architecture, optimizing data transfer is essential for system performance. Direct Memory Access (DMA) controllers enable high-speed data movement between peripherals and memory without constant CPU involvement. A key decision in configuring DMA transfers is selecting the transfer flow sequence.

Polling Method DMA

Polling-based DMA requires the CPU to repeatedly check the DMA controller’s status to determine when a transfer is complete. The CPU polls the DMA controller at set intervals, usually via a status register or memory location. Once the transfer finishes, the CPU resumes its tasks.

Polling-based DMA is simple to implement since it does not require complex interrupt handling. It also offers determinism, as the CPU controls when polling occurs, which benefits real-time systems. Additionally, it incurs lower overhead than interrupts because it avoids context switching.

However, polling keeps the CPU occupied, reducing availability for other tasks. In systems with frequent DMA operations, this can hurt performance. Polling also introduces latency, as the CPU may not immediately detect completed transfers, delaying subsequent tasks. Infrequent or unpredictable data transfers further waste CPU cycles, leading to inefficiency.
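A minimal polling loop might look like the sketch below, which repeatedly reads a hypothetical status register until a "done" bit is set or a timeout expires. The register offset and bit are assumptions, not taken from any particular device:

```c
/* Sketch: polling-based DMA completion. MY_DEV_DMA_STATUS and the DONE bit
 * are hypothetical placeholders for a real device's register map. */
#include <linux/io.h>
#include <linux/bits.h>
#include <linux/ktime.h>
#include <linux/errno.h>

#define MY_DEV_DMA_STATUS 0x18          /* hypothetical status register     */
#define MY_DEV_DMA_DONE   BIT(0)        /* hypothetical "transfer done" bit */

static int my_dev_poll_dma_done(void __iomem *regs, unsigned int timeout_ms)
{
    ktime_t deadline = ktime_add_ms(ktime_get(), timeout_ms);

    /* The CPU stays busy in this loop until the device reports completion,
     * which is the main cost of the polling approach. */
    while (!(readl(regs + MY_DEV_DMA_STATUS) & MY_DEV_DMA_DONE)) {
        if (ktime_after(ktime_get(), deadline))
            return -ETIMEDOUT;
        cpu_relax();
    }
    return 0;
}
```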

Interrupt-Driven DMA

Interrupt-driven DMA notifies the CPU via an interrupt request (IRQ) when a transfer is complete. The CPU pauses its current task to handle the interrupt.

This method reduces CPU overhead, allowing it to handle other tasks while waiting for data transfers. It also minimizes latency by immediately signaling the CPU upon completion, making it ideal for time-sensitive applications. Additionally, by decoupling the CPU from data transfers, it enhances multitasking and system flexibility.

However, interrupt-driven DMA requires hardware support for interrupt handling, increasing system complexity and cost. In systems with multiple interrupts, priority inversion may occur if a low-priority DMA transfer delays higher-priority tasks. Handling interrupts also incurs overhead from context switching, which can affect performance in high-frequency scenarios.

Comparison

When comparing the two methods, interrupt-driven DMA generally offers better performance and responsiveness compared to polling-based DMA, especially in systems with high data transfer rates or stringent latency requirements. Polling-based DMA is simpler to implement but may not be suitable for high-performance or real-time systems. Interrupt-driven DMA, while more complex, offers greater flexibility and efficiency.

In terms of resource utilization, polling-based DMA ties up the CPU, leading to inefficient resource utilization, whereas interrupt-driven DMA allows the CPU to perform other tasks concurrently, improving overall system efficiency.

Both polling-based DMA and interrupt-driven DMA have their advantages and disadvantages, making them suitable for different use cases. The choice between these methods depends on the specific requirements of the system and the trade-offs between simplicity, performance, and flexibility.

Buffer Size

The buffer size plays a crucial role in DMA (Direct Memory Access) transfers as it directly impacts the efficiency, performance, and resource utilization of the system. The buffer size determines the amount of data that can be transferred in each DMA operation before the CPU is involved. 

A larger buffer size allows for fewer DMA transactions, reducing the overhead associated with DMA setup and teardown, and maximizing the throughput of the transfer. However, an excessively large buffer size can lead to wasted memory resources and increased latency if the DMA controller must wait for the buffer to fill before initiating a transfer. 

Conversely, a smaller buffer size may result in more frequent DMA transactions, potentially increasing CPU overhead and reducing overall system performance. Therefore, selecting an optimal buffer size is essential to achieve efficient data transfer, minimize latency, and maximize system throughput in DMA-based applications.
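A simple back-of-the-envelope model makes the trade-off concrete: if every DMA transaction pays a roughly fixed setup cost, effective throughput climbs with buffer size and then flattens out. The numbers in the sketch below are illustrative assumptions, not measurements:

```c
/* Illustrative model of the buffer-size trade-off (assumed numbers, not
 * measured results): each transaction pays a fixed setup cost, so effective
 * throughput rises with buffer size and then plateaus near the link rate. */
#include <stdio.h>

int main(void)
{
    const double link_rate_mbps = 8000.0;  /* assumed raw link rate, MB/s   */
    const double setup_us = 20.0;          /* assumed per-transaction setup */

    for (double buf_kb = 64; buf_kb <= 16384; buf_kb *= 2) {
        double buf_mb = buf_kb / 1024.0;
        double xfer_us = buf_mb / link_rate_mbps * 1e6;
        double effective = buf_mb / ((setup_us + xfer_us) / 1e6);
        printf("%8.0f KB buffer -> %7.1f MB/s effective\n", buf_kb, effective);
    }
    return 0;
}
```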

In research we performed <Link>, we found that each DMA method has a sweet spot before performance stagnates or reaches the point of diminishing returns. The x-axis shows the buffer size and the y-axis shows the transfer rate in MB/sec. Each transfer was tested for 10 seconds.

DMA Performance Comparison Across DMA Methods and Types

Direct Memory Access is a critical technology for enhancing data transfer between devices and a computer’s main memory. This research examines the performance of various DMA implementations across multiple Linux kernel versions. By analyzing data transfer rates with different buffer sizes and DMA types (IOMMU, Direct, SMMU), we aim to identify the most efficient DMA configuration for this hardware setup. The findings provide insights into how these factors influence performance and contribute to a deeper understanding of DMA behavior across different kernels and buffer sizes.

While the specific results here apply to the tested devices and configurations, this research serves as an informative exploration of DMA performance. By studying how buffer size and DMA type affect transfer rates across Linux kernel versions, we hope to inspire further investigations into optimizing data transfer speeds on different hardware setups.

View our webinar on DMA 101: The Essential Webinar for Developers

Technical Specifications

Hardware:

  • Card: Bittware XUPP3R UltraScale+ card running an XDMA IP
  • X64 Machine: ASUS, Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz, 24 GiB RAM
  • ARM64 Machine: NVIDIA AGX Xavier, ARMv8 Processor rev 0 (v8l) 2.2GHz, 16 GiB RAM

Operating System:

  • X64: Ubuntu 18.04, Kernels 4.15, 5.3 / Ubuntu 22.04, Kernels 5.15, 6.0
  • ARM64: Ubuntu 20.04, Kernel 5.10

Disclaimer:

The results in this analysis are based on specific hardware and software configurations. Performance variations between different DMA types (IOMMU, Direct, SMMU) and buffer sizes depend on system configuration and Linux kernel versions. Since results may not be universally applicable, testing on the target hardware and software setup is recommended to determine the optimal DMA type and buffer size.

Methodology

  • Using the WinDriver XDMA sample (xdma_diag), we sent buffers of increasing size to and from the device (Host to Card and Card to Host).
  • Measured the amount of information transferred in 10 seconds. 
  • Tested both Poll and Interrupt transfers. (Method)
  • Compared IOMMU DMA against Direct. (Type)

Main Conclusion

  • The IOMMU DMA method is consistently faster than the direct method.
  • The kernel version matters: newer kernel versions increase the transfer rate and widen the differences between DMA methods.
  • Larger buffer ≠ faster DMA. In most cases, there is a point of diminishing performance, or a plateau in transfer rate is reached. 
  • The ARM machine lagged behind the competition with a small buffer size but caught up as the buffer size was increased. 

** The X-axis represents the buffer size. The Y-axis represents the transfer rate in MB/sec, measured over 10-second transfers.

Test 1 – C2H, Interrupt DMA method, MMU vs. Direct

  • The non-ARM DMAs reach a point of diminishing performance, where an increased buffer size does not improve the data transfer rate. 
  • The new IOMMU and SMMU designs outperform the original Direct design. In nearly all the tests, the amount of data transferred using IOMMU or SMMU is higher than using Direct for buffers of the same size. 
  • Newer kernel versions accentuate the difference between the methods.

Device to Host Interrupts

Test 2 – C2H, Polling DMA method, MMU vs. Direct

  • In all DMA types, the polling-method transfers start at a much higher transfer rate than the interrupt-method transfers.
  • On older kernel versions (4.14 and 5.3), the transfers reach a plateau in transfer speed around a 1 MB buffer. In newer kernel versions, the IOMMU transfer rate improves significantly, but it also reaches a point of diminishing performance.

Device to Host Polls

Test 3 – H2C, Interrupt DMA method, MMU vs. Direct

  • The transfer rate in the non-ARM DMA types starts from the same 64 KB buffer size. On older kernel versions (4.14 and 5.3), the transfer rate remains relatively close across all types as the buffer size grows. In newer kernel versions, the IOMMU transfer rate improves significantly while maintaining the same performance pattern.

Host to Device Interrupts

Test 4 – H2C, Polling DMA method, MMU vs. Direct

  • In all DMA types, the polling-method transfers start at a much higher transfer rate than the interrupt-method transfers.
  • The transfers reach a plateau in transfer speed around a 1 MB buffer on all kernel versions. The IOMMU DMA type’s performance improves significantly in newer kernel versions.

Host to Device Polls

Please feel free to download WinDriver and register for a free trial period.

Scatter-Gather or Contiguous? Understanding DMA Types

In PCIe driver development, DMA (Direct Memory Access) facilitates data transfer between a PCIe device and system memory without involving the CPU. This transfer method significantly improves performance compared to CPU-mediated transfers, as it frees up the CPU for other tasks.

The basic DMA architecture includes four steps, defined and managed in the driver (a code sketch mapping these steps to kernel API calls follows the list):

  1. Allocating a memory region accessible for DMA – A region of memory with properties that allow the device to read from and write to it directly.
  2. Providing the physical address of the memory region to the device – The device uses this address, in whatever form its DMA engine expects, to target the buffer during its DMA operations.
  3. Initiating the DMA transfer – The device uses its DMA controller to directly transfer data between its internal memory and the allocated memory region on the host system.
  4. Notifying when the transfer is complete – The driver frees the allocated memory and then processes the data or triggers further actions.
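A minimal sketch of how those four steps might map onto the Linux kernel's streaming DMA API is shown below; the register offsets, transfer direction, and completion wait are illustrative assumptions rather than code for any specific device:

```c
/* Sketch mapping the four steps above onto the Linux streaming DMA API.
 * Register offsets (0x00, 0x08, 0x10) and the completion wait are
 * hypothetical; writeq assumes a 64-bit platform. */
#include <linux/dma-mapping.h>
#include <linux/io.h>

static int my_dev_dma_once(struct device *dev, void __iomem *regs,
                           void *buf, size_t len)
{
    dma_addr_t handle;

    /* Step 1: make an existing kernel buffer accessible to the device. */
    handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, handle))
        return -ENOMEM;

    /* Step 2: hand the bus address and transfer length to the device. */
    writeq(handle, regs + 0x00);
    writeq(len,    regs + 0x08);

    /* Step 3: initiate the transfer via a doorbell register. */
    writel(1, regs + 0x10);

    /* Step 4: wait for completion (interrupt or polling, not shown), then
     * release the mapping and hand the data to the rest of the driver. */
    dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
    return 0;
}
```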

The memory allocation depends on various factors: the type of data transferred, device-memory compatibility, system architecture, and more. The amount of free memory that can actually be allocated for the transfer is perhaps the most critical factor. Sometimes there is not enough contiguous space to allocate for the DMA.

Both contiguous and scatter-gather DMA (also: SG) are techniques used in PCIe driver development to efficiently transfer data between a device and system memory, but they differ in how they handle buffer allocation:

Contiguous DMA – Allocates and locks a single contiguous segment of memory to execute the transfer. Managing the transfer on the device side is easier, but it is not always possible when sufficient contiguous memory cannot be allocated, a constraint generally caused by fragmentation.

Scatter-Gather DMA – Allocates multiple, potentially non-contiguous blocks of memory. The data fragments residing in memory are managed with a descriptor table, telling the device which data should go where. It is more complex to set up on the driver side as you need to manage the descriptor table.
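As a hedged sketch of the scatter-gather side, the snippet below builds a scatterlist from a set of pages and maps it in one call with the generic Linux DMA API; how the resulting segments are written into the device's descriptor table is hardware-specific and only hinted at in the comments:

```c
/* Sketch: scatter-gather mapping with the generic Linux DMA API. The pages[]
 * array stands in for however the driver gathered its (non-contiguous)
 * buffers; programming the device's descriptor table is hardware-specific. */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int my_dev_map_sg(struct device *dev, struct page **pages,
                         int nr_pages, struct sg_table *sgt)
{
    struct scatterlist *sg;
    int i, nents;

    if (sg_alloc_table(sgt, nr_pages, GFP_KERNEL))
        return -ENOMEM;

    /* Describe each possibly non-contiguous page as one scatterlist entry. */
    for_each_sg(sgt->sgl, sg, nr_pages, i)
        sg_set_page(sg, pages[i], PAGE_SIZE, 0);

    /* Map the whole list in one call; with an IOMMU, adjacent entries may
     * even be merged into fewer, larger DMA segments. */
    nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
    if (!nents) {
        sg_free_table(sgt);
        return -ENOMEM;
    }

    /* Each mapped segment (sg_dma_address()/sg_dma_len()) becomes one entry
     * in the device's descriptor table. */
    return nents;
}
```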

So, which one is better?

As said, contiguous DMA is easier to program, debug and manage. However, SG DMA can be more effective when the requirements demand data scattered across multiple memory locations, in cases of large data transfers and when contiguous memory is scarce.

Regarding performance and speed, there is no straightforward answer. It depends on the device (and which DMA types the hardware supports), the memory (data patterns and available space) and the driver complexity. We recommend checking different methods, including DMA combined with polling and interrupts. Sometimes the results can be surprising. 

WinDriver, the leading cross-platform driver development toolkit for PCI/PCIe devices, offers several tools that make developing both types of DMA easier. As a developer, you will be able to perform kernel operations (including DMA-related ones) from user mode. Moreover, we offer automated code generation based on settings defined in a GUI application (no-code). If you are new to this, WinDriver includes free DMA samples, run and tested against industry-standard DMA IP cores from companies such as Intel Altera, AMD Xilinx, Lattice, and more.

Want to learn more about WinDriver? Contact our team today.
Download WinDriver for free, and enjoy our 30-day trial.

Mastering Driver Development for High-Speed PCIe Devices

Nobody likes latency. The need for high-speed, precise data transfer arises everywhere, from satellites in orbit to data centers deep beneath the Earth’s surface. PCIe 7.0 targets 128 GT/s per lane, fulfilling the promise to double I/O bandwidth every three years.

One of the kernel developer’s goals is to utilize the hardware fully. It is not just about making everything work, but also maximizing the product’s potential. 

Optimize PCIe Devices: 4 Key Areas to Address

  • Use DMA – Perhaps the first thing we think of when it comes to data transfer speed is Direct Memory Access.
    DMA allows for direct data transfer between the device and memory, bypassing the CPU. This cuts down on latency and boosts throughput. Although it can be more challenging to set up a DMA mechanism, a properly implemented DMA significantly improves performance and reduces latency.
  • Implement Kernel Plugin – Kernel plugins are pieces of code (performing specific operations) that run entirely in kernel mode. Because they execute without constantly context-switching to user mode, they load dynamically and their operations run faster. This not only improves speed but also demands fewer resources.
  • Optimize your code – Well-crafted code can make all the difference, especially when there are many “moving parts”. Coding an effective interrupt handler, managing multiple queues, optimizing memory access patterns, and tuning other critical components can further enhance the device’s speed.
  • Update your OS version – The benefits of working with the latest version of an operating system do not stop at performance improvement. New versions often include critical security and other bug fixes, and better compatibility, support, and APIs. Research showed that newer versions enhance device performance.

Efficient data transfer with minimal latency is crucial across various applications. Kernel developers aiming to fully utilize their device’s potential must consider performance-enhancement strategies such as DMA, kernel plugins, optimized code, and an up-to-date operating system, all of which ultimately contribute to enhanced device performance.

WinDriver, the most reliable driver development toolkit, serving leading companies for more than 20 years, offers DMA, kernel plugin, and interrupt handler APIs. Beyond that, WinDriver comes with a samples folder that includes tested interrupt handlers, DMA, and kernel plugin code, plus code samples for DMA IPs by AMD/Xilinx, Intel/Altera, and Lattice, among others. On the coding side, WinDriver offers no-code features, including generating code for I/O operations and predefined actions set in the DriverWizard.

Want to learn more about WinDriver? Contact our team today.
Download WinDriver for free, and enjoy our 30-day trial.

Download WinDriver Free 30-Day Trial


Software License Agreement of WinDriver (TM) Version v16.3.0
© Jungo Connectivity Ltd. 2024 All Rights Reserved

IMPORTANT – READ CAREFULLY: THIS SOFTWARE LICENSE AGREEMENT (“AGREEMENT”) IS A LEGAL AGREEMENT BETWEEN YOU AND JUNGO CONNECTIVITY LTD. (“JUNGO”), FOR THE WINDRIVER SOFTWARE PRODUCT ACCOMPANYING THIS LICENSE (THE “SOFTWARE”). BY INSTALLING, COPYING OR OTHERWISE USING THE SOFTWARE, YOU AGREE TO BE LEGALLY BOUND BY THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT, DO NOT INSTALL, COPY, OR OTHERWISE USE THE SOFTWARE.

1.  OWNERSHIP OF THE SOFTWARE. All right, title, and interest in and to the Software, including associated intellectual property rights, of any sort and/or kind, are and shall remain solely with Jungo and its licensors, and may be protected by copyright, trademark, patent and trade secret law and international treaties. This Agreement does not convey to you an interest in or to the Software, but only a limited, non-transferable, non-sublicensable, non-exclusive, and revocable right of use, in accordance with the terms of this Agreement. You may not remove any proprietary notices and/or any legends from the Software, in whole or in part.

2.  GRANT OF LICENSE. Jungo hereby grants you a personal, non-exclusive, nontransferable, and non-sublicensable node-locked and time limited license to use the Software.

Individuals: Jungo grants you, as an individual, a personal, non-exclusive, “single-user” license to use the Software on a single computer, in the manner provided below, at the site for which the license was given.

Entities: If you are an entity, Jungo grants you the right to designate one individual within your organization (and only one) to have the right to use the Software on a single computer, in the manner provided below, at the site for which the license was given.

License Scope: A single user license allows usage of WinDriver and redistribution of certain components (as defined below) within a single end product SKU, for a single device (identified by its VID/PID (USB) or VID/DID (PCI)), and without SDK/API capabilities.  If you need extended license or distribution rights, please contact Jungo.

3.  EVALUATION LICENSE. If you have not yet paid license fees for the use of the Software, then Jungo hereby grants you a personal, non-exclusive, non-transferable and non-sublicensable license to internally use the Software for evaluation purposes only, for a period of 30 days (the “Evaluation License”). If, after the expiration of the Evaluation License, you wish to continue using the Software and accompanying written materials, you may do so by remitting the required payment to Jungo, and you will then receive a registration code and a license string that will permit you to use the Software on a single computer under one of the license schemes specified in Section 2 above.

4.  OPEN SOURCE. The Software includes certain files that are subject to open source licenses. These files are identified in their header files (“Open Source Files”). You must use the Open Source Files in accordance with the terms of their respective licenses.
In the event of any contradiction between the terms of this Agreement, and the terms of the open source license accompanying a certain Open Source File, the terms of the latter shall prevail, with regard to the said Open Source File.

RESTRICTIONS ON USE AND TRANSFER

5.  Distribution of Files:

(a) You may not distribute, or otherwise transfer or assign, any portion of the Software, including any of the headers or source files that are included in the Software, unless otherwise expressly permitted in this Agreement, subject to the provisions of Section 4 above.

(b) Subject to your full and continued compliance with the terms of this Agreement, including the ongoing payment of annual license fees, you may distribute the following files:

Windows:
windrvr1630.sys
windrvr1630.inf
wd1630.cat
wdapi1630.dll
wdapi_dotnet1630.dll
wdapi_java1630.dll
wdapi_java1600.jar
wdreg.exe
difxapi.dll

Windows_CE:
windrvr1220.dll
wdapi1220.dll

Linux:
windrvr_gcc_v2.a windrvr_gcc_v3.a windrvr_gcc_v3_regparm.a
kp_linux_gcc_v2.o kp_linux_gcc_v3.o kp_linux_gcc_v3_regparm.o
libwdapi1630.so
libwdapi_java1630.so wdapi_java1600.java
kp_wdapi1630_gcc_v2.a kp_wdapi1600_gcc_v3.a kp_wdapi1600_gcc_v3_regparm.a
linux_wrappers.c linux_wrappers.h wdusb_linux.c
wdusb_interface.h wd_ver.h linux_common.h windrvr.h windrvr_usb.h
wdreg
configure makefile.in
configure.wd makefile.wd.in makefile.wd.kbuild.in
configure.usb makefile.usb.in makefile.usb.kbuild.in
setup_inst_dir

(c) The files listed in Section 5.b above may be distributed only as part of a complete application that you distribute under your organization name, and only if they significantly contribute to the functionality of your application. For avoidance of doubt, each organization distributing these files as part of the organization products is required to have valid license(s) under the organization name/VID, irrespective of the party who actually performed the product development. Licenses granted to subcontractors do not grant distribution or other rights to the organizations for which they are developing.

(d) The distribution of the windrvr.h header file is permitted only on Linux.

(e) You may not modify the distributed files specified in Section 5.b of this Agreement.

(f) You may not distribute any header file that describes the WinDriver functions, or functions that call the WinDriver functions and have the same basic functionality as that of the WinDriver functions.

6.  The Software may not be used to develop a development product, an API, or any products, which will eventually be part of a development product or environment, without the written consent of Jungo and subject to additional fees and licensing terms.

7.  You may make printed copies of the written materials accompanying the Software, provided that only users bound by this license use them.

8.  You may not allow any third party to use the Software, grant access to the Software (or any portion thereof) to any third party, or otherwise make any commercial use of the Software, including without limitation, assign, distribute, sublicense, transfer, pledge, lease, rent, or share your rights in the Software or any of your rights under this Agreement, all whether or not for any consideration.

9.  You may not translate, reverse engineer, decompile, disassemble, reproduce, duplicate, copy, or otherwise disseminate all or any part of the Software, or extract source code from the object code of the Software.

10. Jungo reserves the right to revise, update, change, modify, add to, supplement, or delete any and all terms of this License Agreement; provided, however, that changes to this License Agreement will not be applied retroactively. Such changes will be effective with or without prior notice to you. You can review the most current version of this License Agreement under the WinDriver download form page.

11.  You may not incorporate or link any open source software with any open source software part of the Software, or otherwise take any action which may cause the Software or any portion thereof to be subjected to the terms of the Free Software Foundation’s General Public License (GPL) or Lesser General Public License (LGPL), or of any other open source code license.

12.  DISCLAIMER OF WARRANTY. THIS SOFTWARE AND ITS ACCOMPANYING WRITTEN MATERIALS ARE PROVIDED BY JUNGO “AS IS” WITHOUT ANY WARRANTY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT, ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED UNDER APPLICABLE LAW.

13.  NO LIABILITY. TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL JUNGO OR ITS LICENSORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, SAVINGS, IP INFRINGEMENT OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

14.  Governing Law. This Agreement and use of the Software are governed by the laws of the State of Israel, regardless of its conflict of laws rules, and the competent courts of the State of Israel shall have sole and exclusive jurisdiction over any dispute under this Agreement or otherwise related to the Software.

15.  Confidentiality. The Software, including any additional information related thereto, contains confidential and proprietary information of Jungo. Accordingly, you agree that you will not, nor allow any third party to, disseminate, transfer, grant access to, or otherwise disclose to any third party the Software or any part thereof or any other confidential or proprietary information of Jungo provided in connection therewith. You will maintain all copies of the Software and all related documentation in confidence.

16.  Termination and Effects of Termination. Jungo may terminate this Agreement and the licenses granted to you hereunder at any time if you breach any of your obligations hereunder, by issuance of written notice to such effect, addressed to you at the address you provided in your registration form. Upon expiration or other termination of this Agreement, the Licenses granted to you hereunder shall immediately and automatically be canceled, and you will immediately remove all copies of the Software from your computer(s) and cease any use thereof.

17.  Contact Details. If you have any questions concerning this Agreement or wish to contact Jungo for any reason —

Web site: www.jungo.com
Email:    [email protected]

18.  U.S. GOVERNMENT RESTRICTED RIGHTS. The Software and documentation are provided with RESTRICTED RIGHTS.
Use, duplication, or disclosure by the Government is subject to restrictions set forth in subparagraph (c)(1) of The Rights in Technical Data and Computer Software clause at DFARS 252.227-7013 or subparagraphs (c)(1)(ii) and (2) of Commercial Computer Software – Restricted Rights at 48 CFR 52.227-19, as applicable.

19. Automatic Renewal. The subscription shall be automatically renewed, unless Licensee notifies Licensor 30 days or more prior to the expiration date of the subscription, of its intent not to renew the subscription.

 
