
Who Else Wants To Know The Mystery Behind C++ COMPILER?

 Intel oneAPI DPC++/C++ Compiler and Intel C++ Compiler Classic are Intel’s C, C++, SYCL, and Data Parallel C++ (DPC++) compilers for Intel processor-based systems, available for Windows, Linux, and macOS operating systems.[3]


Overview

Intel oneAPI DPC++/C++ Compiler is available for Windows and Linux and supports compiling C, C++, SYCL, and Data Parallel C++ (DPC++) source, targeting Intel IA-32, Intel 64 (aka x86-64), Core, Xeon, and Xeon Scalable processors, as well as GPUs including Intel Processor Graphics Gen9 and above, Intel Xe architecture, and Intel Programmable Acceleration Card with Intel Arria 10 GX FPGA.[4] Like Intel C++ Compiler Classic, it also supports the Microsoft Visual Studio and Eclipse IDE development environments, and supports threading via Intel oneAPI Threading Building Blocks, OpenMP, and native threads.


DPC++[5][6] builds on the SYCL specification from The Khronos Group. It is designed to allow developers to reuse code across hardware targets (CPUs and accelerators such as GPUs and FPGAs) and perform custom tuning for a specific accelerator. DPC++ comprises C++17 and SYCL language features and incorporates open-source community extensions that make SYCL easier to use. Many of these extensions were adopted by the SYCL 2020 provisional specification[7] including unified shared memory, group algorithms, and sub-groups.


In August 2021, Intel announced the compiler's complete adoption of LLVM, bringing faster build times and support for the latest C++ standards.[8]


Intel C++ Compiler Classic is available for Windows, Linux, and macOS and supports compiling C and C++ source, targeting Intel IA-32, Intel 64 (x86-64), Core, Xeon, and Xeon Scalable processors.[4] It supports the Microsoft Visual Studio and Eclipse IDE development environments. Intel C++ Compiler Classic supports threading via Intel oneAPI Threading Building Blocks, OpenMP, and native threads.


Architectures

According to Intel,[9] starting with the 2023.0 release, Intel oneAPI DPC++/C++ Compiler supports all current Intel general-purpose x86-64 CPUs and GPUs including:


Processors:

Legacy Intel IA-32 and Intel 64 (x86-64) processors

Intel Core processors

Intel Xeon processor family

Intel Xeon Scalable processors

Intel Xeon Processor Max Series

GPUs:

Intel Processor Graphics Gen9 and above

Intel Xe architecture

Intel Programmable Acceleration Card with Intel Arria 10 GX FPGA

Intel Data Center GPUs including Flex Series and Max Series

Intel FPGAs

Intel C++ Compiler Classic targets general-purpose Intel x86-64 architecture CPUs including:[4]


Legacy Intel IA-32 and Intel 64 (x86-64) processors

Intel Core processors

Intel Xeon processor family

Intel Xeon Scalable processors

Toolkits

The Intel oneAPI DPC++/C++ Compiler is available either as a standalone component[10] or as part of the Intel oneAPI Base Toolkit, Intel oneAPI HPC Toolkit, and Intel oneAPI IoT Toolkit.[4]


The Intel C++ Compiler Classic is available either as a standalone component[11] or as part of the Intel oneAPI Base Toolkit.[4]


Documentation

Documentation can be found at the Intel Software Technical Documentation site.


Debugging

The Intel compilers emit debugging information in standard formats for the common debuggers (DWARF 2 on Linux, consumable by gdb, and COFF on Windows). The flags to compile with debugging information are /Zi on Windows and -g on Linux. Debugging is done with the Visual Studio debugger on Windows and with gdb on Linux.
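A command-line sketch of the flags above (the file and program names are hypothetical, and the oneAPI driver icpx is assumed to be installed and on PATH):

```shell
# Linux: compile with DWARF debugging information, then debug with gdb
icpx -g -O0 main.cpp -o app
gdb ./app

# Windows (from a developer command prompt): /Zi emits debugging
# information for use with the Visual Studio debugger
icx /Zi main.cpp
```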


While the Intel compiler can generate gprof-compatible profiling output, Intel also provides a kernel-level, system-wide statistical profiler called Intel VTune Profiler. VTune can be used from the command line or through an included GUI on Linux or Windows, and it can also be integrated into Visual Studio on Windows or Eclipse on Linux. In addition to VTune, Intel Advisor specializes in vectorization optimization, offload modeling, and flow-graph design, and provides tools for threading design and prototyping.


Intel also offers a tool for memory- and threading-error detection called Intel Inspector XE. For memory errors, it helps detect memory leaks, memory corruption, allocation/deallocation API mismatches, and inconsistent memory API usage. For threading errors, it helps detect data races (on both heap and stack), deadlocks, and thread and synchronization API errors.


Support for non-Intel processors

Previous versions of Intel’s C/C++ compilers have been criticized for optimizing less aggressively for non-Intel processors; for example, Steve Westfield wrote in a 2005 article at the AMD website:[12]


Intel 8.1 C/C++ compiler uses the flag -xN (for Linux) or -QxN (for Windows) to take advantage of the SSE2 extensions. For SSE3, the compiler switch is -xP (for Linux) and -QxP (for Windows). ... With the -xN/-QxN and -xP/-QxP flags set, it checks the processor vendor string—and if it's not "GenuineIntel", it stops execution without even checking the feature flags. Ouch!


The Danish developer and scholar Agner Fog wrote in 2009:[13]


The Intel compiler and several different Intel function libraries have suboptimal performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string is "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.


This vendor-specific CPU dispatching can degrade the performance of software built with an Intel compiler or an Intel function library on non-Intel processors, possibly without the programmer's knowledge. It has allegedly led to misleading benchmarks,[13] including one incident in which changing the CPUID of a VIA Nano significantly improved results.[14] In November 2009, AMD and Intel reached a legal settlement over this and related issues,[15] and in late 2010, Intel settled a US Federal Trade Commission antitrust investigation.[16]


The FTC settlement included a disclosure provision where Intel must:[17]


publish clearly that its compiler discriminates against non-Intel processors (such as AMD's designs), not fully utilizing their features and producing inferior code.


In compliance with this ruling, Intel added disclaimers to its compiler documentation:[18]


Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.


As late as 2013, an article in The Register alleged that object code produced by the Intel compiler for the AnTuTu Mobile Benchmark omitted portions of the benchmark, inflating Intel's results relative to ARM platforms.[19]


Release history

The following lists versions of the Intel C++ Compiler since 2003:[20]


Intel C++ Compiler 8.0 (December 15, 2003): Precompiled headers, code-coverage tools.

Intel C++ Compiler 8.1 (September 2004): AMD64 architecture support (Linux).

Intel C++ Compiler 9.0 (June 14, 2005): AMD64 architecture support (Windows), software-based speculative pre-computation (SSP) optimization, improved loop-optimization reports.

Intel C++ Compiler 10.0 (June 5, 2007): Improved parallelizer and vectorizer, Streaming SIMD Extensions 4 (SSE4), new and enhanced optimization reports for advanced loop transformations, new optimized exception-handling implementation.

Intel C++ Compiler 10.1 (November 7, 2007): New OpenMP compatibility runtime library that allows mixing and matching with libraries and objects built by Visual C++; enabled with "/Qopenmp /Qopenmp-lib:compat" on Windows and "-openmp -openmp-lib:compat" on Linux. Supports more intrinsics from Visual Studio 2005. VS2008 support in this release was command line only; IDE integration was not yet supported.

Intel C++ Compiler 11.0 (November 2008): Initial C++11 support, VS2008 IDE integration on Windows, OpenMP 3.0, Source Checker for static memory/parallel diagnostics.

Intel C++ Compiler 11.1 (June 23, 2009): Support for the latest Intel SSE4.2, AVX, and AES instructions; Parallel Debugger Extension; improved integration into Microsoft Visual Studio, Eclipse CDT 5.0, and Mac Xcode IDE.

Intel C++ Composer XE 2011 up to Update 5 (compiler 12.0) (November 7, 2010): Cilk Plus language extensions, Guided Auto-Parallelism, improved C++11 support.[21]

Intel C++ Composer XE 2011 Update 6 and above (compiler 12.1) (September 8, 2011): Cilk Plus language extensions updated to specification version 1.1 and made available on Mac OS X in addition to Windows and Linux; Threading Building Blocks updated to version 4.0; Apple Blocks supported on Mac OS X; improved C++11 support, including variadic templates; OpenMP 3.1 support.

Intel C++ Composer XE 2013 (compiler 13.0) (September 5, 2012): Linux-based support for Intel Xeon Phi coprocessors, support for Microsoft Visual Studio 12 (Desktop), support for gcc 4.7, support for Intel AVX2 instructions, updates to existing functionality focused on improved application performance.[22]

Intel C++ Composer XE 2013 SP1 (compiler 14.0) (September 4, 2013): Online installer; support for Intel Xeon Phi coprocessors; preview Win32-only support for Intel graphics; improved C++11 support.

Intel C++ Composer XE 2013 SP1 Update 1 (compiler 14.0.1) (October 18, 2013): Japanese localization of 14.0; Windows 8.1 and Xcode 5.0 support.

Intel C++ Compiler for Android (compiler 14.0.1) (November 12, 2013): Hosted on Windows, Linux, or OS X; compatible with Android NDK tools, including the gcc compiler and Eclipse.

Intel C++ Composer XE 2015 (compiler 15.0) (July 25, 2014): Full C++11 language support; additional OpenMP 4.0 and Cilk Plus enhancements.

Intel C++ Composer XE 2015 Update 1 (compiler 15.0.1) (October 30, 2014): AVX-512 support; Japanese localization.

Intel C++ 16.0 (August 25, 2015): Suite-based availability (Intel Parallel Studio XE, Intel System Studio).

Intel C++ 17.0 (September 15, 2016): Suite-based availability (Intel Parallel Studio XE, Intel System Studio).

Intel C++ 18.0 (January 26, 2017): Suite-based availability (Intel Parallel Studio XE, Intel System Studio).

Intel C++ 19.0 (April 3, 2018): Suite-based availability (Intel Parallel Studio XE, Intel System Studio).

Intel C++ Compiler Classic 19.1 (October 22, 2020): Initial OpenMP 5.1 support, CPU only.

Intel oneAPI DPC++/C++ Compiler 2021 (December 8, 2020): SYCL, DPC++, initial OpenMP 5.1 support.

Intel C++ Compiler Classic 2021.1.2 and Intel oneAPI DPC++/C++ Compiler 2021.1.2 (December 16, 2020): oneAPI DPC++/C++ introduces support for GPU offloading.

Intel C++ Compiler Classic 2022.2.1 and Intel oneAPI DPC++/C++ Compiler 2022.2.1 (November 2, 2022): Support for the latest Intel CPUs, GPUs, and FPGAs; support for the upcoming ISO/IEC 9899:2023 (C23) and ISO/IEC 14882:2023 (C++23) language standards.

Intel C++ Compiler Classic 2023.0 and Intel oneAPI DPC++/C++ Compiler 2023.0 (Q1 2023[9]): Support for Intel Advanced Matrix Extensions (Intel AMX), Quick Assist Technology (QAT), Intel AVX-512 with Vector Neural Network Instructions (VNNI), bfloat16, GPU datatype flexibility, Intel Xe matrix extensions (Intel XMX), the Intel GPU vector engine, and Xe-Link.

See also

oneAPI Data Analytics Library (oneDAL)

Intel Developer Zone (Intel DZ; support and discussion)

Intel Fortran Compiler

Intel Integrated Performance Primitives (IPP)

Intel oneAPI Math Kernel Library (oneMKL)

Intel Parallel Studio

Cilk Plus

VTune Amplifier

AMD Optimizing C/C++ Compiler

GNU Compiler Collection

LLVM/Clang




References

  1. Intel Corporation (2022-11-02). "Intel® oneAPI DPC++/C++ Compiler". software.intel.com. Intel. Retrieved 2022-12-01.
  2. Intel Corporation (2022-11-02). "Intel® C++ Compiler Classic". software.intel.com. Intel. Retrieved 2022-12-01.
  3. Intel (2021). "Intel oneAPI DPC++/C++ Compiler". Intel.com. Intel. Retrieved 2021-02-09.
  4. Intel Corporation (2021). "Intel® oneAPI DPC++/C++ Compiler". software.intel.com. Intel. Retrieved 2021-02-09.
  5. "Intel oneAPI DPC++ Compiler 2020-06 Released With New Features". www.phoronix.com. Retrieved 2020-12-17.
  6. Editorial Team (2019-12-16). "Heterogeneous Computing Programming: oneAPI and Data Parallel C++". insideBIGDATA. Retrieved 2020-12-17.
  7. "Khronos Steps Towards Widespread Deployment of SYCL with Release of SYCL 2020 Provisional Specification". The Khronos Group. 2020-06-30. Retrieved 2020-12-17.
  8. "Intel C/C++ compilers complete adoption of LLVM". Intel. Retrieved 2021-08-17.
  9. Intel Corporation (November 30, 2022). "Intel oneAPI 2023 Release: Preview the Tools". www.intel.com. Intel. Retrieved 2022-12-01.
  10. Intel Corporation (2020-12-16). "Intel® oneAPI DPC++/C++ Compiler". software.intel.com. Intel. Retrieved 2021-02-09.
  11. Intel Corporation (2020-12-16). "Intel® C++ Compiler Classic". software.intel.com. Intel. Retrieved 2021-02-09.
  12. "Your Processor, Your Compiler, and You: The Case of the Secret CPUID String". Archived from the original on 2012-01-05. Retrieved 2011-12-11.
  13. "Agner's CPU blog - Intel's "cripple AMD" function". www.agner.org.
  14. Hruska, Joel (29 July 2008). "Low-end grudge match: Nano vs. Atom". Ars Technica.
  15. "Settlement agreement" (PDF). download.intel.com.
  16. "Intel and U.S. Federal Trade Commission Reach Tentative Settlement". Newsroom.intel.com. 2010-08-04. Retrieved 2012-10-13.
  17. "FTC, Intel Reach Settlement; Intel Banned From Anticompetitive Practices". Archived from the original on 2012-02-03. Retrieved 2011-10-20.
  18. "Optimization Notice". Intel Corporation. Retrieved 11 December 2013.
  19. "Analyst: Tests showing Intel smartphones beating ARM were rigged". The Register.
  20. "Intel® C++ Compiler Release Notes and New Features". Intel Corporation. Retrieved 27 April 2021.
  21. Note attached to the release in which Cilk Plus was introduced; current documentation: http://software.intel.com/en-us/intel-composer-xe/
  22. Intel C++ Composer XE 2013 Release Notes: http://software.intel.com/en-us/articles/intel-c-composer-xe-2013-release-notes/
