Debug and Trace Support for NXP S32N55 Vehicle Processor

Lauterbach’s TRACE32® development tools now support NXP® Semiconductors’ S32N55 vehicle super-integration processor, designed to consolidate real-time vehicle functions in software-defined vehicle (SDV) architectures. TRACE32® support includes simultaneous debugging of the processor cores as well as non-intrusive processor trace capture.

The S32N55 real-time super-integration processor combines high-performance real-time processing, a firewalled hardware security engine, and hardware isolation and virtualization for the safe and secure integration of real-time vehicle functions. The SoC implements 16 split-lock Arm® Cortex®-R52 cores operating at up to 1.2 GHz, as well as two lockstep Arm® Cortex®-M7 cores serving as system manager and communication manager.

Lauterbach’s TRACE32® development tools enable hardware-accelerated debugging and real-time tracing of all the Arm® processors and other cores implemented on the chip. TRACE32® tools consist of the universal PowerView debugging and tracing software as well as debug and trace accelerator modules. While Lauterbach’s intelligent PowerDebug modules provide the highest available download speeds and shortest response times for efficient debugging and test automation, PowerTrace real-time trace modules provide full insight into what the processors and other cores of the system are doing, without impacting its real-time performance in any way. Trace analysis, including code coverage measurements, can help bring embedded designs to market faster, safer, and more reliably than ever.

TRACE32® enables simultaneous debugging and tracing of the Arm® processors and other cores in an SoC; a unique capability to cover the whole system, regardless of whether the system is SMP (Symmetric Multiprocessing), AMP (Asymmetric Multiprocessing), or iAMP (Integrated Asymmetrical Multiprocessing). Lauterbach’s innovative iAMP debug and trace technology makes it possible to debug multicore systems with identical CPU instruction sets in just one TRACE32® PowerView GUI.

“NXP’s real-time super-integration processor S32N55 for software-defined vehicle architectures provides excellent computing performance as well as interfacing and functional safety features,” says Norbert Weiss, Managing Director at Lauterbach GmbH. “With the latest support in TRACE32®, we enable S32N55 customers to develop their applications with our market-leading debug and trace tools right from the start.”

“NXP’s S32N55 processor is pioneering the super-integration of vehicle functions in central compute applications, enabling automakers to achieve significant cost and development efficiencies,” said Brian Carlson, Global Marketing Director for Automotive Processors at NXP. “Lauterbach’s TRACE32® development tools’ powerful debugging capabilities and insights offer complementary value, enabling developers to maximize their software performance.”

Lauterbach’s TRACE32® development tools enable developers of automotive SDV architectures to evolve their applications based on S32N55 SoCs even faster and more easily.

About LAUTERBACH 
Lauterbach is the leading manufacturer of cutting-edge development tools for embedded systems with more than 45 years of experience. It is an international, well-established company, serving customers all over the world, partnering with all semiconductor manufacturers and growing steadily.

XJTAG Shows the Benefits of Boundary Scan at Evertiq Expo, Malmö

XJTAG®, a leader in JTAG boundary scan products, will be presenting a talk entitled “What is JTAG and how can JTAG help me?” at Evertiq Expo in Malmö on 23 May 2024 as part of the launch of version 4.0 of the XJTAG software suite. The company will also be staffing a booth at the conference along with its distributor partner, Nohau Solutions.

XJTAG 4.0 contains a number of software improvements, including Optimised Scans, which allow different JTAG chains on a board to be run simultaneously at different clock frequencies, so tests run at their full potential rather than being throttled by a few slower devices, as the back-of-the-envelope sketch below illustrates.
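
To see why per-chain clocking matters, here is a back-of-the-envelope sketch. The chain sizes, clock rates, and the sequential-versus-parallel model are invented for illustration and are not XJTAG figures; the point is only that one slow device no longer sets the pace for the whole board.

```rust
// Hedged model: assume that with a single shared TCK the chains are scanned
// one after another at the slowest device's maximum rate, while Optimised
// Scans run the chains simultaneously, each at its own maximum rate.
fn main() {
    // (bits to shift per test pass, maximum safe TCK in Hz) for each chain.
    let chains = [
        (40_000u64, 20_000_000u64),
        (60_000, 10_000_000),
        (8_000, 1_000_000), // one slow device drags the shared clock down
    ];

    // Shared clock: every bit everywhere shifts at the slowest chain's rate.
    let slowest_tck = chains.iter().map(|&(_, f)| f).min().unwrap();
    let shared_clock_s: f64 = chains
        .iter()
        .map(|&(bits, _)| bits as f64 / slowest_tck as f64)
        .sum();

    // Per-chain clocks: chains run in parallel, so the longest one dominates.
    let per_chain_s = chains
        .iter()
        .map(|&(bits, f)| bits as f64 / f as f64)
        .fold(0.0_f64, f64::max);

    println!("shared clock: {shared_clock_s:.3} s, per-chain clocks: {per_chain_s:.3} s");
}
```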

Simon Payne, XJTAG CEO said, “All FPGAs have boundary scan built in, but many engineers don’t realise how it can help them. Evertiq Expo is a great opportunity to demonstrate how the board’s JTAG connection allows engineers to use the FPGA’s boundary scan capabilities to test their board.”

Evertiq started out as a magazine covering electronics news and developments, with an initial focus on the Swedish market, and now holds a number of tradeshows around Europe under the Evertiq Expo banner. The Malmö show is the flagship expo, set in one of Sweden’s fastest-growing regions for the industry, home to startups and universities.

This is a free show allowing engineers to meet their current and potential suppliers in the electronics design and manufacturing industries. The formal presentations during the day and informal discussions at the venue’s booths and over lunch or coffee give engineers the opportunity to learn from industry experts like XJTAG.

Tommaso De Vivo, XJTAG’s Vice President Business Development, EMEA, will be presenting at Evertiq Expo as well as being available at exhibition booth #63 for 1-to-1 discussions. He said, “I’ll be explaining what boundary scan is and how it allows an FPGA’s pins to be turned into virtual test points that can be read and controlled. I’ll show you how that can be used to test the board for assembly faults and to perform accelerated programming.”

One of the biggest problems with testing modern high-density PCBAs comes from the lack of physical access to points in the circuit caused by shrinking board area and the use of advanced IC packages such as BGAs. Tommaso De Vivo said, “The beauty of using boundary scan to test the board is that the reduced level of physical access no longer matters. And because you don’t need to configure the FPGA or run any code on the board, you can also use it to find out what’s wrong on boards that won’t boot.”

XJTAG’s tools provide an easy-to-use way to make the most of an FPGA’s boundary scan capabilities. Boundary scan is used by many engineers in R&D, test, and manufacturing across all industry sectors. It assists them with board bring-up as well as test and debug, and having an FPGA on the board also allows for accelerated programming of memories.

About JTAG

JTAG is an IEEE standard that was developed to address the difficulties of testing circuits that use packaging technologies such as Ball Grid Arrays and Chip Scale Packages, where solder connections aren’t accessible to traditional bed-of-nails testers. Although JTAG has since become popular for processor debug and for programming FPGAs and CPLDs, those uses only exploit the standard’s communications protocol. The full benefit of the JTAG standard comes from its introduction of boundary scan techniques for testing and debugging assembled boards; XJTAG’s tools give you an easy way to use those capabilities.
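
To make the boundary scan idea concrete, here is a minimal, tool-agnostic simulation; the cell model, names, and fault are invented for illustration and have nothing to do with XJTAG’s actual implementation. A known pattern is driven through the chain of boundary cells, and an assembly fault shows up as a mismatch between what was driven and what was captured.

```rust
// Toy simulation of the boundary-scan idea (IEEE 1149.1): drive a known
// pattern onto the nets through the boundary cells and compare what is
// captured back. A solder fault shorting a net to ground appears as a bit
// stuck at 0.

/// One boundary-scan cell sitting between a device pin and the scan chain.
#[derive(Clone, Copy)]
struct BoundaryCell {
    stuck_at: Option<bool>, // Some(v): an assembly fault forces the net to v
}

/// Drive `pattern` through the chain and capture what each cell reports.
fn scan(chain: &[BoundaryCell], pattern: &[bool]) -> Vec<bool> {
    chain
        .iter()
        .zip(pattern)
        .map(|(cell, &bit)| cell.stuck_at.unwrap_or(bit))
        .collect()
}

fn main() {
    // Four nets; net 2 is shorted to ground by a bad solder joint.
    let chain = [
        BoundaryCell { stuck_at: None },
        BoundaryCell { stuck_at: None },
        BoundaryCell { stuck_at: Some(false) },
        BoundaryCell { stuck_at: None },
    ];
    let pattern = [true, false, true, true];
    let captured = scan(&chain, &pattern);

    for (i, (sent, got)) in pattern.iter().zip(&captured).enumerate() {
        if sent != got {
            println!("net {i}: drove {sent}, read {got} -> probable assembly fault");
        }
    }
}
```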

Cantata Hybrid – Bringing Unique Safety Standards Compliance for GoogleTest Suites

Cantata Hybrid enables the execution of tests by utilizing non-Cantata test suites, such as GoogleTest® and GoogleMock®, as input sources. This capability allows the generation of Cantata test results evidence, seamlessly combined with code coverage data obtained from a certified unit test tool to comply with all major safety-critical standards.

This specialized subset of Cantata is a cost-effective alternative that allows developers to run existing GoogleTest suites to generate test results evidence and code coverage from a certified unit test tool.

Key features of Cantata Hybrid include:

  • Certified for ISO 26262, DO-178C/DO-330, IEC 61508 and other safety standards
  • No need to rewrite tests or learn new tools
  • Tests run on host/target with coverage up to MC/DC level (illustrated below)
  • Cost-effective alternative to expensive tool qualification
  • Integrates with other QA Systems certified static and dynamic testing tools

Cantata Hybrid bridges the gap between open-source testing and safety-critical software development, enabling you to achieve functional safety compliance with your existing GoogleTest suites.
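
As a rough illustration of what MC/DC-level coverage demands, consider the toy decision (a || b) && c. MC/DC requires every condition to be shown to independently flip the decision’s outcome, which a minimal set of n + 1 test vectors achieves. Cantata measures this on the real C/C++ code under test; the sketch below is written in Rust purely to keep the example self-contained.

```rust
// Illustration of MC/DC (Modified Condition/Decision Coverage): each of the
// three conditions must be shown to independently flip the outcome.

fn decision(a: bool, b: bool, c: bool) -> bool {
    (a || b) && c
}

fn main() {
    // A minimal MC/DC test set for 3 conditions needs only n + 1 = 4 vectors.
    // Each pair below differs in exactly one condition, with the outcome flipping:
    assert!(decision(true, false, true));   // T1
    assert!(!decision(false, false, true)); // T2: differs from T1 only in `a`
    assert!(decision(false, true, true));   // T3: differs from T2 only in `b`
    assert!(!decision(true, false, false)); // T4: differs from T1 only in `c`
    println!("4 vectors demonstrate the independent effect of a, b and c");
}
```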

Rust Development Platform Debug Support for Infineon AURIX™

Lauterbach’s TRACE32® development tools now also support the HighTec Rust Compiler, tailored for Infineon AURIX™ TC3x and TC4x microcontrollers. Compiled Rust programs can therefore be debugged not only at machine-code level but also at source level.

Rust is a multi-paradigm system programming language developed by the open-source community with the aim, among other things, of preventing program errors that lead to invalid memory accesses or buffer overflows, and thus potentially to security vulnerabilities. The HighTec Rust Compiler delivers the full range of Rust language features, including memory safety, concurrency, and interoperability, for applications that must be safe, secure, high-performance, and rapidly deployable.
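
As a minimal sketch of what that memory safety means in practice (a generic Rust example, not specific to AURIX™ or the HighTec toolchain), the classes of error mentioned above either fail to compile or are caught deterministically at runtime:

```rust
// What "memory safety" buys in practice: the bug classes the paragraph
// mentions cannot compile, or are caught deterministically at runtime.

fn main() {
    let frame = [0u8, 1, 2, 3];

    // Out-of-bounds access: checked indexing returns an Option instead of
    // silently reading past the buffer (the classic C buffer overflow).
    match frame.get(10) {
        Some(byte) => println!("byte: {byte}"),
        None => println!("index 10 is out of bounds, no overflow occurred"),
    }

    // Dangling references are rejected at compile time. Uncommenting the
    // block below makes rustc refuse to build: `local` does not live long
    // enough to be referenced from outside its scope.
    //
    // let dangling: &u8;
    // {
    //     let local = 42u8;
    //     dangling = &local; // error[E0597]: `local` does not live long enough
    // }
    // println!("{dangling}");
}
```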

Lauterbach’s TRACE32® enables hardware-accelerated debugging and real-time tracing of Rust code for all the TriCore and other cores like PPU and GTM implemented in AURIX™ TC3x and TC4x, a unique capability to cover the whole system. TRACE32® tools consist of the universal PowerView debugging and tracing software as well as debug and trace accelerator modules. While Lauterbach’s intelligent PowerDebug modules provide the highest available download speeds and smallest response times for efficient debugging and test automation, PowerTrace real-time trace modules provide full insights into what the CPUs and other cores of an AURIX™ system are doing without impacting its real-time performance in any way. Thanks to Lauterbach’s leading hypervisor and OS awareness technology, even virtualized environments can be examined safely and without restrictions. Trace analysis including code coverage measurements can support bringing embedded designs to market faster, safer, and more reliably than ever.

“Rust is a programming language that offers security, high performance, and ease of use,” says Norbert Weiss, Managing Director at Lauterbach GmbH. “With the support of our market-leading TRACE32® debug and trace tools for the HighTec Rust Compiler, embedded developers can now take advantage of Rust for their AURIX™-based projects.”

“The HighTec Rust Compiler for AURIX™ TC4x and TC3x utilizes the advanced open-source LLVM technology to leverage the full range of Rust features, including memory safety, concurrency, and interoperability, for safe and secure high-performance applications,” explains Mario Cupelli, CTO at HighTec EDV-Systeme. “We are very pleased that, together with our long-term partner Lauterbach and TRACE32®, we can offer a leading solution for the development, debugging, tracing, and deployment of safe and secure embedded applications written in Rust and C/C++.”

Together with HighTec’s Rust Development Platform, Lauterbach’s TRACE32® enables developers of embedded devices to evolve Rust applications based on AURIX™ TC3x and TC4x microcontrollers even faster and more easily.

CodeSonar V8.1 – Language coverage now includes Kotlin, Python, Go, Rust, JavaScript, and TypeScript

CodeSecure announced a major new release of CodeSonar. CodeSonar 8.1 extends its developer-centric approach to product security with language support for Kotlin, Python, Go, Rust, JavaScript, and TypeScript. In addition to C/C++, Java, and C#, CodeSonar now covers the emerging security-centric embedded languages as well as modern web-centric languages, supporting end-to-end application development under one SAST application. Numerous high-profile product exploits have driven significant changes in how product development teams approach securing their code.

These DevSecOps trends include:

  • Making SAST a developer-centric solution that addresses security upfront while minimizing disruption to the workflow
  • Managing geographically dispersed development teams
  • Using a single SAST platform for consistent metrics, reporting, and vulnerability management

This evolution enables developers to leverage CodeSonar’s advanced analysis capabilities across a diverse range of projects and technologies.

Operating Systems: Whether you’re developing for Windows, Linux, a real-time operating system, or bare metal, CodeSonar ensures compatibility across various platforms.

Compilers: CodeSonar supports more than 90 compilers, including Clang, GCC, Microsoft, IAR, TASKING, QNX, and Wind River. CodeSonar adapts to the language of your choice, providing comprehensive analysis in C/C++, Java, C#, Kotlin, Python, Go, Rust, JavaScript, and TypeScript.

Checkers: With hundreds of built-in checkers, CodeSonar examines code for potential vulnerabilities, coding errors, and compliance violations. From memory leaks to buffer overflows, CodeSonar’s advanced static analysis capabilities help identify issues early in the development cycle, saving time and resources in the long run. Whether you prefer cloud-based solutions, on-premises deployment, or fully air-gapped environments, CodeSonar offers flexible host platform options to suit your needs.

Integrations: CodeSonar integrates seamlessly with popular development products and CI/CD pipelines, streamlining the code review and deployment process. From IDE plugins to Jenkins, GitHub, and GitLab integrations, CodeSonar fits into your existing toolchain, enhancing developer productivity and collaboration.

Coding Standards: CodeSonar helps organizations adhere to regulatory requirements such as MISRA, CERT, and CWE, ensuring code quality and security at every stage of the development lifecycle.

Deployment Models: Support for on-premises deployment for enhanced control and security, or a cloud-based solution for scalability and accessibility.

Koenigsegg Unleashes Its Potential With Lauterbach

Established in 1994, Koenigsegg is renowned for crafting luxury cars and cutting-edge vehicles, consistently leading in automotive innovation.

We explore their journey of integrating Lauterbach TRACE32, a vital step in elevating their software development and requirements management processes. We also dive into their latest groundbreaking project, the world’s first 4-seater Megacar, a testament to their ongoing commitment to innovation.

The top 10 reasons why ALM is important

Why is an ALM tool important in today’s development of embedded products?

For more than 25 years, Nohau has preached and campaigned for improving the development process with Requirements Management and ALM/PLM tools. Success was limited, as most embedded engineers focused on implementing the initial functionality, which was typically described in documents or spreadsheets. The products and applications were typically relatively simple, and future modifications and improvements were handled in the same documents.

Now we are seeing huge demand for requirements and ALM/PLM tools. This is mainly due to the enormous increase in the complexity of embedded applications, which often have lifetimes of more than 15 years and involve many developers.

Let’s try to list 10 points why Application Lifecycle Management is so important:

  1. Smooth Development Process – Developing an application involves a team of developers, standardized processes, and documentation, not just one brilliant application developer. ALM tools help you embed these processes and documentation by acting as a central hub for all data related to the application development lifecycle. This enables full traceability and, hence, high accountability.
  2. Preparing and Organizing the Development Process – ALM tools help manage the application development lifecycle. The planning phase begins as soon as the clients share their project requirements. With the help of ALM tools, you can draw up your plans more efficiently using tools that fit your specific requirements, whether they support the waterfall methodology, the agile methodology, or both.
  3. Maintain Budgets & Productivity – The first step in any planning is to set up a financial budget. Choosing methodologies that can drain budgets and productivity is a costly mistake. ALM integration eliminates the need for separate testing environments, and with all-in-one software, review and management become easier too.
  4. Team Management – A communicative, coordinated workspace is essential for efficient, smooth software development. ALM keeps all members on the same page with real-time strategies, changed requirements, and regular project status. Remote teams benefit particularly from this.
  5. Speed + Quality – If the team does not collaborate properly, the chances of gaps, delayed deliveries, and low product quality increase. When you run your project on ALM software, the integrated tools help deliver the user requirements successfully, and with high quality.
  6. Carrying the Load – Projects can get stuck at some point, and when they do, sound choices and decisions are needed. ALM combines resources and processes in one tool, which helps determine solutions at each step.
  7. Employee Satisfaction – Employees show their dedication and interest through their productivity levels, and appreciating their efforts and choices is a must. ALM gives employees the freedom to use the tools and make their own choices and decisions, keeping them motivated, satisfied, and productive.
  8. Team Productivity – Team productivity is of the utmost importance for a successful outcome in any project. ALM-integrated software makes it easy to distribute and allocate tasks, and it also helps track productivity, quality, and progress regularly.
  9. Fixing Bugs – Testing ensures that the application has as few bugs as possible. ALM tools provide a platform that unites the development and testing processes, reducing the chance of gaps and enhancing the quality of the application.
  10. Customer Satisfaction – Every organization strives to satisfy its customers. ALM tools help maintain high visibility and transparency between the service provider and the clients.


What is Traceability?

It’s a long journey from the initial requirements to the final deliverables, and there are many things that can go wrong along the way. To ensure that deliverables stay aligned with business requirements, project managers should identify, track, and trace requirements from their origins, through their development and specification, to their subsequent deployment and use, leveraging the power of a Requirements Traceability Matrix (RTM) and Requirements Management (RM) software.

Traceability Explained

Simply put, traceability is the ability to trace something. Across industries, including healthcare, manufacturing, supply chain, and software development, traceability ensures that final deliverables don’t stray too far from the original requirements.

The term itself is a blend of two words, trace and ability, and it underpins three critical business management processes: quality management (which enables organizations to hit quality targets and meet customer expectations), change management (which tracks changes to the product during development), and risk management (which tracks and verifies vulnerabilities to product integrity).

Traceability is now more important than ever due to various government regulations and the increased pressure on organizations across industries to improve product quality and adhere to strict safety and security standards.

Traceability provides several important benefits that make it well worth the extra effort. By providing a complete, trustworthy record of all past activity, it helps investigate and troubleshoot issues during events such as recalls, allowing stakeholders to locate the source of the problem. The data generated by traceability can be used to improve critical business processes and address performance issues related to lead times, transportation costs, and inventory management, among other things.

Requirement Traceability

When most people say “traceability,” what they actually mean is requirement traceability, which is defined as the ability to describe and follow the life of a requirement in both a forward and backward direction in the development lifecycle, from its origins to deployment and beyond.

The purpose of requirement traceability is to provide visibility over requirements and make it possible to easily verify that requirements are met. Requirement traceability also helps analyze the impact of changes by revealing how a change made to one requirement impacts other requirements.

Requirements can be tracked either manually or using various requirement tracking software tools. Requirement tracking software tools make the process far less cumbersome and error-prone, and they come with a number of extra features to provide a systematic way of documenting, analyzing, and prioritizing requirements.

Standards that Require Traceability

With traceability come many important benefits that make it a worthwhile activity, but there are also multiple standards that prescribe it for specific industries or types of products.

For example, the DO-178B guideline deals with the safety of safety-critical software used in certain airborne systems. It assures the development of safe and reliable software for airborne environments by concentrating on objectives for software life cycle processes and examining the effects of a failure condition in the system. DO-178B states that it should be possible to trace back to the origin of each requirement, which is why every change made to the requirement should be documented in order to achieve traceability. The same goes for DO-254, which is similar to DO-178B, except that it’s used for hardware instead of software.

Other standards that require traceability include ISO 26262, an international standard for the functional safety of electrical and/or electronic systems in production automobiles, defined by the International Organization for Standardization (ISO) in 2011, and IEC 61508, an international standard published by the International Electrotechnical Commission that describes how to apply, design, deploy, and maintain automatic protection systems, called safety-related systems.

Traceability Matrix

A traceability matrix is a very effective way of ensuring full requirements traceability. It establishes an audit trail by mapping artifacts of one type (such as requirements), depicted in columns, to artifacts of another type (such as source code), depicted in rows, resulting in a table-like representation of the traces between artifacts.

[Figure: an example traceability matrix generated by Visure Requirements]

A traceability matrix is a useful visual aid that makes a large amount of information visible at a glance, highlighting possible issues so they can be addressed long before they have a chance to turn into big problems.

While easy to explain, traceability matrices can quickly become very complex and difficult to manage. For this reason, project managers seldom create them manually. Instead, they rely on requirements management tools to track changes to requirements from ideation, through production, to completion.

A well-designed traceability matrix associates each requirement with the appropriate business objectives, making the evaluation of potential changes quick and easy, reducing project risk, promoting consistency between requirements, allowing monitoring and control across the lifecycle of requirements, and more.
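
The concept is simple enough to sketch in a few lines of code. The example below is a hedged illustration with invented requirement and test names; real RM tools such as Visure derive the trace links from their project database. It builds a tiny matrix with requirements in columns and test artifacts in rows, then flags any requirement that no test traces to.

```rust
use std::collections::BTreeMap;

// Minimal sketch of a requirements traceability matrix: requirements in
// columns, test artifacts in rows, an "x" where a trace link exists.

fn main() {
    let requirements = ["REQ-1", "REQ-2", "REQ-3"];
    // artifact -> requirements it traces to (invented data)
    let mut links: BTreeMap<&str, Vec<&str>> = BTreeMap::new();
    links.insert("TEST-A", vec!["REQ-1"]);
    links.insert("TEST-B", vec!["REQ-1", "REQ-3"]);

    // Header row.
    print!("{:<8}", "");
    for r in &requirements {
        print!("{r:<8}");
    }
    println!();

    // One row per artifact.
    for (artifact, traced) in &links {
        print!("{artifact:<8}");
        for r in &requirements {
            print!("{:<8}", if traced.contains(r) { "x" } else { "" });
        }
        println!();
    }

    // The matrix makes gaps visible: a requirement with an empty column
    // has no test coverage at all.
    for r in &requirements {
        if !links.values().any(|t| t.contains(r)) {
            println!("warning: {r} is not traced by any test");
        }
    }
}
```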

Ensuring Good Traceability with Visure

Visure Requirements is a feature-packed Requirements Management (RM) software tool designed to provide requirements traceability from the origins of requirements, through their development and specification, to the subsequent deployment and use of the product, and through periods of ongoing refinement and iteration in any of these phases.

Managing all requirement-related information, the relationships between requirements, and their interactions with users, Visure Requirements delivers complete requirements traceability in a single tool, with integral support for the complete requirements management process.

While the need for requirements traceability is universal, there are countless ways to approach it, because no two organizations are exactly alike. Visure Requirements is fully customizable, allowing organizations to adapt it to their needs and preferences, including changes to the available menus, toolbars, columns, buttons, and other components.

Visure Requirements can automatically generate a requirements traceability matrix, as well as other reports and dashboards, to display traceability between different levels of requirements, such as product, system, and component requirements and design, and more.

What is Impact Analysis? Best Practices for Doing Change Impact Analysis

What is Impact Analysis?

Change is an inevitable part of the world, and development is therefore a continuous process. However, a newly introduced change may affect other areas of the application, so it is important to analyze the effect, or impact, of the introduced change. That is what impact analysis is all about.

Impact analysis, also known as change impact analysis, was first described in 1996 by American software engineers Robert S. Arnold and Shawn A. Bohner in their book Software Change Impact Analysis. In the book, Arnold and Bohner stated that impact analysis is about “identifying the potential consequences of a change or estimating what needs to be modified to accomplish a change.”

Impact Analysis, as the name suggests, is about analyzing the impact of the changes in a product or application. It is one of the most integral steps in the development cycle of any product as it provides useful information about the areas of the system that might be affected by the change in any adverse way.

Types of Impact Analysis

According to Arnold and Bohner, there are three main types of impact analysis:

  1. Traceability Impact Analysis – Traceability impact analysis captures the links between requirements, specifications, design elements, and tests, analyzing their relationships to determine the scope of an initiating change. Manually determining what will be affected by a change can be extremely time-consuming in complex projects, which is where requirements management software comes in (a minimal sketch follows this list; more about such tools later in this article).
  2. Dependency Impact Analysis – This type of impact analysis is used to determine the depth of the impact on the system.
  3. Experiential Impact Analysis – Taking into account the prior experience of experts in the organization, experiential impact analysis studies what happened in similar situations in the past to determine what may happen in the future.
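
The traceability-based variant lends itself to a simple sketch: treat the trace links as a directed graph and compute everything reachable from the changed artifact. The link data below is invented for illustration, and the traversal is not how any particular RM tool implements it.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Traceability-based impact analysis: artifacts form a graph via trace
// links, and the impact set of a change is everything reachable from the
// changed artifact.

fn impact_set<'a>(
    links: &HashMap<&'a str, Vec<&'a str>>,
    changed: &'a str,
) -> HashSet<&'a str> {
    let mut affected = HashSet::new();
    let mut queue = VecDeque::from([changed]);
    // Breadth-first traversal over the trace links.
    while let Some(item) = queue.pop_front() {
        if let Some(downstream) = links.get(item) {
            for &next in downstream {
                if affected.insert(next) {
                    queue.push_back(next);
                }
            }
        }
    }
    affected
}

fn main() {
    // requirement -> design -> code -> tests, as in the traceability chain.
    let links = HashMap::from([
        ("REQ-1", vec!["DES-1"]),
        ("DES-1", vec!["src/motor.rs"]),
        ("src/motor.rs", vec!["TEST-A", "TEST-B"]),
    ]);
    let affected = impact_set(&links, "REQ-1");
    println!("changing REQ-1 affects {} artifacts: {affected:?}", affected.len());
}
```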

Advantages of Impact Analysis

As we mentioned earlier, Impact Analysis is one of the most integral steps in the development cycle of any product. The various advantages of impact analysis include:

  • Accuracy – Impact Analysis provides essential and accurate information regarding the changes in the modules of the application.
  • Enhanced Efficiency – Impact Analysis helps the testers plan better and more efficient test cases by providing clear and concise information about the changes and the effects of those changes.
  • Precision – Impact analysis documents are easy to read, which helps testers understand the information and work with more precision.
  • Saves Time – With the help of impact analysis, testers can perform testing in modules or sub-modules rather than testing the whole application at once. They can properly prioritize the areas that need to be tested and thus save a lot of time.
  • Easy Bug Detection – Impact analysis also improves bug detection; impact analysis documents are quite helpful for integration testing.

Impact Analysis Document

An impact analysis document is primarily used as a checklist for evaluating change requests before work on them begins. A typical impact analysis document includes:

  • Description of the issue
  • Explanation of how the defect is causing failure or inefficiency
  • Estimation of the complexity
  • Estimation of the cost and time to fix the issue
  • Functionality that is to be tested
  • List of the new test cases created for the change
  • Reference document and technical specification
  • ….

Impact Analysis Procedure

There are five simple steps to conducting an effective impact analysis:

  1. Prepare the team – Before we make any changes, we must prepare a team. All the team members must have access to all the modules and attributes in the application and must also possess the required knowledge about the proposed changes.
  2. Inspect High-Level Modules – The team members will then analyze the high-level modules of the application which might be affected by the newly proposed change. This would provide them with a better knowledge of the workflow rules in the modules.
  3. Inspect Low-Level Modules – After analyzing the high-level modules, the team would move towards the low-level modules and identify the impact of the new changes. A separate document has to be prepared for all the modules.
  4. Evaluate the Impact – The documents prepared after analyzing the high and low-level modules will have all the details on the impact of the changes, both positive and negative. On the basis of this document, the testers will evaluate the identified impacts and will further get a clearer picture of the benefits and issues with the new changes.
  5. Work on Negative Impacts – Once the team members have a clear idea of the negative impacts, they can work on them. They can consult with the team and stakeholders and discuss whether the change should be implemented. Regression testing can also be performed at this stage.


Best Practices for Doing Change Impact Analysis

While it’s impossible to turn the experience of experts on impact analysis into just a few bullet points, there are some best practices for doing change impact analysis that everyone should know about.

  1. It’s useful to distinguish between quantitative (monetary) impacts and qualitative impacts.
  2. Never forget to closely define the scope of each impact analysis.
  3. Establish an impact analysis project team that represents all the areas within the scope of the impact analysis.
  4. It’s always easier to get people involved if you have obtained written executive commitment for the impact analysis.
  5. Take advantage of requirements management software tools to ensure end-to-end traceability.

Using a Requirements Management tool for Change Impact Analysis

It’s not an exaggeration to say that impact analysis is a key aspect of responsible requirements management because it provides an accurate understanding of the implications of a proposed change, helping everyone involved make informed decisions.

The problem is that manually describing and tracking the life of a requirement from its conception, through specification and development, down to its deployment is nearly impossible on complex projects with thousands of artifacts. Requirements management tools such as Visure Requirements make it easy to identify the source of each requirement and track all changes affecting it, ensuring end-to-end traceability and providing accurate, documented information for impact analysis.

The original post by Visure Solutions can be read here.

TRACE32® from Lauterbach Provides Support for Telechips’ Dolphin+, Dolphin3 SoCs and VCP MCU for Automotive Infotainment and ADAS

Lauterbach, the world’s leading supplier of debug and trace tools, announces the addition of support for Telechips’ Dolphin+ (TCC803x) and Dolphin3 (TCC805x) SoCs as well as the VCP (TCC70xx) MCU to its TRACE32® development system. TRACE32® is an out-of-the-box debug and trace solution covering all the needs of modern embedded software development, such as multicore debugging and tracing for symmetric (SMP) and asymmetric (AMP) multiprocessing, hypervisor and operating system awareness, and code coverage analysis.

“Telechips’ Dolphin+ and Dolphin3 SoCs target IVI and automotive cockpit systems such as digital clusters, HUD (Head-Up Display), and AVM (Around View Monitoring). Dolphin+ and Dolphin3 are designed around Arm® Cortex® multi-cores and a powerful GPU delivering excellent 3D graphics,” said Stanley Kim, VP of Business Unit at Telechips. “Telechips’ recently launched MCU product line, the VCP (Vehicle Control Processor), is capable of handling all of a vehicle’s smart functional systems, supporting applications such as telematics, digital clusters, CID (Central Information Display), and wireless charging as well as IVI, and it operates with extremely low power consumption while offering wide scalability and advanced functional safety.”

Ready to order? Or just want to hear more?

Give us a call on 44 52 16 50, or fill in the fields and we’ll call you!