
Problem building 64 bit numpy using MKL and vc11 (Windows)


Hi, I’ve built a 64-bit numpy (1.9) using vc11, the Intel Fortran compiler, and the MKL ‘mkl_rt’ library.

*why? (see end of message for the reason, if interested)

 

Any advice or assistance would be greatly appreciated.  If I can offer additional information, I will happily do so.

The build appears to go just fine (no errors noted), and numpy loads into python just fine as well.

(I note a warning:  ### Warning:  python_xerbla.c is disabled ### -- however, it doesn’t appear to be problematic?)

I have also confirmed that numpy sees the mkl blas and lapack libs. 

 

>>> numpy.__config__.show()

lapack_opt_info:

    libraries = ['mkl_lapack', 'mkl_rt']

    library_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\lib\\intel64']

    define_macros = [('SCIPY_MKL_H', None)]

    include_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\include']

blas_opt_info:

    libraries = ['mkl_rt']

    library_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\lib\\intel64']

    define_macros = [('SCIPY_MKL_H', None)]

    include_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\include']

openblas_lapack_info:

  NOT AVAILABLE

lapack_mkl_info:

    libraries = ['mkl_lapack', 'mkl_rt']

    library_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\lib\\intel64']

    define_macros = [('SCIPY_MKL_H', None)]

    include_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\include']

blas_mkl_info:

    libraries = ['mkl_rt']

    library_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\lib\\intel64']

    define_macros = [('SCIPY_MKL_H', None)]

    include_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\include']

mkl_info:

    libraries = ['mkl_rt']

    library_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\lib\\intel64']

    define_macros = [('SCIPY_MKL_H', None)]

    include_dirs = ['C:\\Program Files (x86)\\Intel\\Composer XE 2015\\mkl\\include']

 

Everything *looks* to be in order upon casual inspection (*I think*, please correct me if I’m wrong!)

However, there is no performance boost when running a few different tests in numpy (singular value decomposition, for example), and only a single thread appears to be in play.
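One way to sanity-check the threading side is to ask the MKL runtime directly how many threads it will use. The sketch below assumes the mkl_rt shared library is discoverable on the loader path; mkl_get_max_threads() is MKL's standard service function for this, and the helper returns None instead of failing when MKL can't be loaded:

```python
import ctypes
import ctypes.util

def mkl_max_threads():
    """Return MKL's reported max thread count, or None if mkl_rt can't be loaded."""
    # 'mkl_rt' resolves to mkl_rt.dll on Windows / libmkl_rt.so on Linux;
    # whether it is found depends on how MKL was installed (an assumption here).
    name = ctypes.util.find_library("mkl_rt")
    if name is None:
        return None
    try:
        mkl = ctypes.CDLL(name)
        return int(mkl.mkl_get_max_threads())
    except (OSError, AttributeError):
        return None

print(mkl_max_threads())  # a value > 1 suggests MKL threading is available
```

If this reports 1 on a multi-core machine, setting MKL_NUM_THREADS in the environment before starting Python is worth a try.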

 

Running numpy.test('full') reveals 21 errors.

 

For instance,

LINK : fatal error LNK1104: cannot open file 'ifconsol.lib'

 

The other is a recurring error with f2py:

 

ERROR: test_size.TestSizeSumExample.test_transpose

----------------------------------------------------------------------

Traceback (most recent call last):

  File "C:\Program Files\Side Effects Software\Houdini 13.0.509\python27\lib\site-packages\nose\case.py", line 371, in setUp

    try_run(self.inst, ('setup', 'setUp'))

  File "C:\Program Files\Side Effects Software\Houdini 13.0.509\python27\lib\site-packages\nose\util.py", line 478, in try_run

    return func()

  File "C:\Program Files\Side Effects Software\Houdini 13.0.509\python27\lib\site-packages\numpy\f2py\tests\util.py", line 353, in setUp

    module_name=self.module_name)

  File "C:\Program Files\Side Effects Software\Houdini 13.0.509\python27\lib\site-packages\numpy\f2py\tests\util.py", line 80, in wrapper

   raise ret

RuntimeError: Running f2py failed: ['-m', '_test_ext_module_5403', 'c:\\users\\jareyn~1\\appdata\\local\\temp\\tmpvykewl\\foo.f90']

Reading .f2py_f2cmap ...

        Mapping "real(kind=rk)" to "double"

Succesfully applied user defined changes from .f2py_f2cmap

 

 

Everything that requires configuration appears to agree with this Intel application note, except for the use of the Intel C++ compiler:

https://software.intel.com/en-us/articles/numpyscipy-with-intel-mkl

I have also referenced the Windows build docs on scipy.org:

http://www.scipy.org/scipylib/building/windows.html#building-scipy

Some info about my configuration:

site.cfg:

include_dirs = C:\Program Files (x86)\Intel\Composer XE 2015\mkl\include

library_dirs = C:\Program Files (x86)\Intel\Composer XE 2015\mkl\lib\intel64

mkl_libs = mkl_rt

 

PATH = (one path per line for readability)

C:\Program Files\Side Effects Software\Houdini 13.0.509\python27;

C:\Program Files\Side Effects Software\Houdini 13.0.509\python27\Scripts;

C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\x86_amd64;

C:\Program Files (x86)\Intel\Composer XE 2015\bin\intel64

 

LD_LIBRARY_PATH =

C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\x86_amd64;

C:\Program Files (x86)\Intel\Composer XE 2015\bin\intel64;

C:\Program Files (x86)\Intel\Composer XE 2015\mkl\lib\intel64;

C:\Program Files (x86)\Intel\Composer XE 2015\compiler\lib\intel64

 

Thank you in advance for your time,  

 

-Jay

PS - Please note that I've cross-posted this message to the numpy-discussion mailing list as well.  I will post any useful information I receive here.

=====

*Why am I doing this?

I need numpy with MKL to run with the version of Python that ships with Houdini (Python 2.7.5 (default, Oct 24 2013, 17:49:49) [MSC v.1700 64 bit (AMD64)] on win32).

So downloading a prebuilt 64-bit numpy isn’t an option, because no compatible compiler version is available.


log sum and under/overflow


I have converted some neural-net code from MATLAB that adds/subtracts very small probabilities and is of the form log(sum(Array)), so it may be affected by underflow. There is a common workaround called the log-sum-exp trick, which shifts the values by maxval(Array) before exponentiating and shifts back afterwards; see http://machineintelligence.tumblr.com/post/4998477107/the-log-sum-exp-trick for example. I could replicate this in Fortran, but before I do: is there an MKL function that computes log(sum(Array)) with minimal underflow/overflow, so that I don't reinvent the wheel? Here is the MATLAB code (repmat is similar to Fortran's spread(), and ones creates a matrix of 1's).

Alternatively, are there any Fortran-specific tricks for handling very small numbers accurately?

function ls = logsumexp(xx, dim)   % function wrapper inferred from the nargin/return usage
if(length(xx(:))==1), ls=xx; return; end

xdims=size(xx);
if(nargin<2)
  dim=find(xdims>1);
end

% shift by (close to) the per-slice maximum so exp() stays in range, then shift back
alpha = max(xx,[],dim)-log(realmax)/2;
repdims=ones(size(xdims)); repdims(dim)=xdims(dim);
ls = alpha+log(sum(exp(xx-repmat(alpha,repdims)),dim));
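Before porting to Fortran, the trick is easy to check in isolation. Here is a pure-Python sketch of the same shift (no MKL involved; a Fortran transcription would use maxval, exp, sum, and log in the same way):

```python
import math

def logsumexp(xs):
    """log(sum(exp(x) for x in xs)) computed with the max-shift trick."""
    alpha = max(xs)                       # shift so the largest exponent becomes 0
    if math.isinf(alpha):                 # all inputs are -inf: log(0) case
        return alpha
    return alpha + math.log(sum(math.exp(x - alpha) for x in xs))

# Naive evaluation underflows: exp(-1000.0) == 0.0 in double precision,
# so log(sum(...)) would blow up, but the shifted version is fine.
print(logsumexp([-1000.0, -1000.0]))  # ≈ -1000.0 + log(2) ≈ -999.307
```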

 

Installing MKL BLAS95 and LAPACK95 for Intel Fortran


Hi,

I am using Intel Parallel Studio XE Composer 2013 (formerly Intel Fortran Composer XE 2013) together with Microsoft Visual Studio 2012 Professional to write Fortran programs.  I would like to install the Intel MKL BLAS95 and LAPACK95 libraries so that I can call LAPACK95 routines from within a Fortran program, and I would like to know the steps required for such an installation.  I have already downloaded the Intel MKL library package.

With Thanks

I-Lok Chang 

 

Inconsistent in-place and out-of-place results in mkl_dfti


Hi,

I am trying to switch from out-of-place to in-place computation using MKL 10.3.12. I don't see any problem with the forward FFT. However, for the backward FFT with dimensions larger than 4, I get inconsistent results between the in-place and out-of-place calculations. This happens when I use a backward scale of 1.0 (which my problem requires); the issue goes away when a scale of 1/(K1*K2*K3) is used instead. I have attached a minimal code that reproduces the results, compiled with:

gfortran -fcray-pointer -I$myMKLINC main.f90 -L$MKLROOT/lib/intel64/ -L/opt/intel/composer_xe_2011_sp1.12.361/compiler/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -liomp5 -o amain
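As a side note on the scale conventions at play: with a backward scale of 1.0, a forward-plus-backward pass returns the input multiplied by K1*K2*K3, so in-place and out-of-place outputs can only be compared after accounting for that factor. A tiny pure-Python 1-D illustration of the convention (just the DFT definition, not MKL):

```python
import cmath

def dft(x, sign):
    """Unnormalized DFT: sign=-1 forward, sign=+1 backward (no 1/N scale)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
y = dft(dft(x, -1), +1)  # forward, then backward with scale 1.0
# Each element comes back multiplied by N = 4:
print([round(v.real, 6) for v in y])  # [4.0, 8.0, 12.0, 16.0]
```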

Thanks for any help

Amir

Linking errors occurred while using CLUSTER_SPARSE_SOLVER


Hello Everyone,

While using CLUSTER_SPARSE_SOLVER to solve a sparse matrix, I got linking errors. I have already included both "mkl_cluster_sparse_solver.h" and "mpi.h". What should I do next?

Thanks in advance.

Mayur

BUILD LOG:

1>------ Build started: Project: MKLWrapper, Configuration: Release x64 ------
1>Build started 12/5/2014 3:53:24 PM.
1>InitializeBuildStatus:
1>  Touching "..\..\Obj\x64\Release\MKLWrapper\MKLWrapper.unsuccessfulbuild".
1>ClCompile:
1>  All outputs are up-to-date.
1>  All outputs are up-to-date.
1>  MKLWrapper.cpp
1>Link:
1>     Creating library ..\..\Obj\x64\Release\MKLWrapper\..\MKLWrapper.lib and object ..\..\Obj\x64\Release\MKLWrapper\..\MKLWrapper.exp
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Barrier
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Comm_rank
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Comm_size
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Irecv
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Recv
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Isend
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Send
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Test
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Bcast
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Comm_split
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Scatterv
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Gatherv
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Allgather
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Reduce
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Alltoall
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Alltoallv
1>mkl_core.lib(cpardiso_blacs.obj) : error LNK2001: unresolved external symbol MKL_Comm_free

Announcing Intel® Data Analytics Acceleration Library 2016 Beta


We are pleased to announce the release of Intel® Data Analytics Acceleration Library 2016 Beta!

Intel® Data Analytics Acceleration Library is a C++ and Java API library of optimized analytics building blocks covering all data analysis stages, from data acquisition to data mining and machine learning. It is essential for engineering high-performance data application solutions.

To join the free Beta program and get instructions on downloading the software, follow the links below:

Visit our User Forum and join the discussions if you have any questions.

This is the initial Beta release; it introduces many features, including:

  • APIs for the C++ and Java programming languages.
  • Data mining and analysis algorithms for
    • Computing correlation distance and Cosine distance
    • PCA (Correlation, SVD)
    • Matrix decomposition (SVD, QR, Cholesky)
    • Computing statistical moments
    • Computing variance-covariance matrices
    • Univariate and multivariate outlier detection
    • Association rule mining
  • Algorithms for supervised and unsupervised machine learning:
    • Linear regressions
    • Naïve Bayes classifier
    • AdaBoost, LogitBoost, and BrownBoost classifiers
    • SVM
    • K-Means clustering
    • Expectation Maximization (EM) for Gaussian Mixture Models (GMM)
  • Support for local and distributed data sources:
    • In-file and in-memory CSV
    • MySQL
    • HDFS
  • Support for Resilient Distributed Dataset (RDD) objects for Apache Spark*. 
  • Data compression and decompression:
    • ZLIB
    • LZO
    • RLE
    • BZIP2
  • Data serialization and deserialization.
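To give one of the bullets above some shape: cosine distance, one of the listed analysis primitives, reduces to a dot product and two norms. A pure-Python sketch of the math only (the actual Intel DAAL API is class-based and will look quite different):

```python
import math

def cosine_distance(a, b):
    """1 - cos(angle between vectors a and b): 0 for parallel, 1 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

print(round(cosine_distance([1.0, 0.0], [0.0, 1.0]), 12))  # orthogonal -> 1.0
print(round(cosine_distance([1.0, 2.0], [2.0, 4.0]), 12))  # parallel   -> 0.0
```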

Supported operating systems are Windows*, Linux*, and OS X*. 

Please join other users in discussions about Intel Data Analytics Acceleration Library on the User Forum.

This program will end in late June 2015. During the program, we will contact you to gain feedback. Thank you for your interest in Intel Data Analytics Acceleration Library. Let us know how we can help you!

What programming languages does this library support?


Question : 

What programming languages does this library support?

Is this library related to Hadoop? How do I use it on Hadoop?


Is this library related to Hadoop? How do I use it on Hadoop?


Is the library standalone, or a component of some Intel software development products?


Is the library standalone, or a component of some Intel software development products?

Intel® Math Kernel Library (Intel® MKL) 11.3 Beta Release Notes


This document provides a general summary of new features and important notes about the Intel® Math Kernel Library (Intel® MKL) software product.

Please see the following links to the online resources and documents for the latest information regarding Intel MKL:

Links to documentation, help, and code samples can be found on the main Intel MKL product page. For technical support visit the Intel MKL technical support forum and review the articles in the Intel MKL knowledge base.

Please register your product using your preferred email address. This helps Intel recognize you as a valued customer in the support forum and ensures that you will be notified of product updates. You can read Intel's Online Privacy Notice Summary if you have any questions regarding the use of your email address for software product registration.

What's New in Intel® Math Kernel Library (Intel® MKL) 11.3 Beta

  • Introduced MPI wrappers that allow users to build a custom BLACS library for most MPI implementations.
  • Cluster components (Cluster Sparse Solver, Cluster FFT, ScaLAPACK) are now available for OS X*.
  • Extended the Intel MKL memory manager to improve scaling on large SMP systems.
  • BLAS
    • Introduced ?GEMM_BATCH and (C/Z)GEMM3M_BATCH functions to perform multiple independent matrix-matrix multiply operations.
    • Improved parallel performance of (D/S)SYMV on all Intel Xeon processors.
    • Improved (C/D/S/Z/DZ/SC)ROT performance for Intel Advanced Vector Extensions (Intel AVX) architectures in 64-bit Intel MKL.
    • Improved (C/Z)ROT performance for Intel Advanced Vector Extensions 2 (Intel AVX2) architectures in 64-bit Intel MKL.
    • Improved parallel performance of ?SYRK/?HERK, ?SYR2K/?HER2K, and DGEMM for cases with large k sizes on Intel AVX2 architectures in 64-bit Intel MKL.
    • Improved ?SYRK/?HERK and ?SYR2K/?HER2K performance on Intel Xeon Phi coprocessors.
  • Sparse BLAS
    • Introduced new 2-stage (inspector-executor) APIs for Level 2 and Level 3 sparse BLAS functions. This feature provides performance benefits to some applications (e.g. iterative solvers), where a matrix structure analysis done in the first (inspector) stage allows better optimizations for operations in the subsequent (executor) stage. 
    • Both 0-based and 1-based indexing, and both row-major and column-major ordering of matrices, are supported in all new 2-stage API functions.
    • Extended BSR format support to matrix addition and multiplication where both operands are sparse, as well as to the triangular solvers for block triangular and block diagonal sparse matrices. 
  • LAPACK and ScaLAPACK
    • LAPACK 3.5.0 compatibility provides 70 new functions, including symmetric/hermitian LDLT factorization with rook pivoting, and CS decomposition for tall and skinny matrices with orthonormal columns.
    • Significantly improved performance of LU factorization for non-square matrices on Intel AVX2 architectures.
    • Improved the performance of LU factorization when CNR (conditional numerical reproducibility) is enabled, narrowing the performance gap between CNR-on and CNR-off to no more than 5%. 
    • Improved performance of symmetric eigensolver on Intel AVX2 architectures, for cases where only eigenvalues are computed.
    • Improved performance of SVD, for cases where singular vectors are computed, on multi-socket systems based on Intel AVX or Intel AVX2 architectures.
  • Intel MKL PARDISO
    • Significantly improved scalability on Intel Xeon Phi coprocessors.
  • Random Number Generators
    • Introduced counter-based pseudorandom number generator ARS-5 based on the Intel AES-NI instruction set.
    • Introduced counter-based pseudorandom number generator Philox4x32-10.
  • Documentation
    • Introduced a new C-language version of the Intel MKL Reference manual, the Reference Manual for Intel® Math Kernel Library – C is available at <install-dir>/documentation_2016/en/mkl/common/mklman-c/index.htm. It documents a subset of the product domains available in the full Reference Manual for Intel® Math Kernel Library, which generally documents Fortran syntax. The domains documented in the C version include:
      • Basic Linear Algebra Subprograms (BLAS)
      • Sparse BLAS
      • LAPACK routines for solving systems of linear equations, least squares problems, eigenvalue and singular value problems, and Sylvester equations
      • Direct and Iterative Sparse Solver Routines
      • Extended Eigensolver Routines
      • Vector Mathematical Functions
      • Vector Statistical Functions
      • General Fast Fourier Transform (FFT) functions
      • Cluster FFT functions
      • Tools for solving non-linear optimization problems and partial differential equations.
  • Important Note: Intel® MKL Cluster Support for IA-32 is now deprecated and support will be removed starting with Intel® MKL 11.3.
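For readers unfamiliar with the batched-GEMM idea mentioned under BLAS above: ?GEMM_BATCH performs many independent matrix-matrix products in a single call, letting the library schedule them together instead of one at a time. A pure-Python sketch of the semantics only (the real MKL interface takes arrays of pointers, leading dimensions, and group counts):

```python
def gemm(a, b):
    """Plain matrix-matrix product C = A * B, with matrices as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def gemm_batch(a_list, b_list):
    """The ?GEMM_BATCH idea: many independent multiplies submitted at once."""
    return [gemm(a, b) for a, b in zip(a_list, b_list)]

batch = gemm_batch(
    [[[1, 2], [3, 4]], [[2, 0], [0, 2]]],
    [[[5, 6], [7, 8]], [[1, 1], [1, 1]]],
)
print(batch[0])  # [[19, 22], [43, 50]]
print(batch[1])  # [[2, 2], [2, 2]]
```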

Product Contents

Intel MKL now ships as a single package covering both the IA-32 and Intel® 64 architectures, and is also available through an online installer.

Technical Support

If you did not register your Intel software product during installation, please do so now at the Intel® Software Development Products Registration Center. Registration entitles you to free technical support, product updates, and upgrades for the duration of the support term.

For general information about Intel technical support, product updates, user forums, FAQs, tips and tricks and other support questions, please visit http://www.intel.com/software/products/support/.

Note: If your distributor provides technical support for this product, please contact them rather than Intel.

For technical information about Intel MKL, including FAQs, tips and tricks, and other support information, please visit the Intel MKL forum: http://software.intel.com/en-us/forums/intel-math-kernel-library/ and browse the Intel MKL knowledge base: http://software.intel.com/en-us/articles/intel-mkl-kb/all/.

Attributions

As referenced in the End User License Agreement, attribution requires, at a minimum, prominently displaying the full Intel product name (e.g. "Intel® Math Kernel Library") and providing a link/URL to the Intel MKL homepage (http://www.intel.com/software/products/mkl) in both the product documentation and website.

The original versions of the BLAS from which that part of Intel MKL was derived can be obtained from http://www.netlib.org/blas/index.html.

The original versions of LAPACK from which that part of Intel MKL was derived can be obtained from http://www.netlib.org/lapack/index.html. The authors of LAPACK are E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. Our FORTRAN 90/95 interfaces to LAPACK are similar to those in the LAPACK95 package at http://www.netlib.org/lapack95/index.html. All interfaces are provided for pure procedures.

The original versions of ScaLAPACK from which that part of Intel MKL was derived can be obtained from http://www.netlib.org/scalapack/index.html. The authors of ScaLAPACK are L. S. Blackford, J. Choi, A. Cleary, E. D'Azevedo, J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley.

The Intel MKL Extended Eigensolver functionality is based on the FEAST Eigenvalue Solver 2.0 (http://www.ecs.umass.edu/~polizzi/feast/).

PARDISO (PARallel DIrect SOlver)* in Intel MKL was originally developed by the Department of Computer Science at the University of Basel (http://www.unibas.ch). It can be obtained at http://www.pardiso-project.org.

Some FFT functions in this release of Intel MKL have been generated by the SPIRAL software generation system (http://www.spiral.net/) under license from Carnegie Mellon University. The Authors of SPIRAL are Markus Puschel, Jose Moura, Jeremy Johnson, David Padua, Manuela Veloso, Bryan Singer, Jianxin Xiong, Franz Franchetti, Aca Gacic, Yevgen Voronenko, Kang Chen, Robert W. Johnson, and Nick Rizzolo.

License Definitions

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's Web Site.

Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See http://www.intel.com/products/processor_number for details.

BlueMoon, BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino Inside, Cilk, Core Inside, E-GOLD, Flexpipe, i960, Intel, the Intel logo, Intel AppUp, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Insider, the Intel Inside logo, Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel Sponsors of Tomorrow., the Intel Sponsors of Tomorrow. logo, Intel StrataFlash, Intel vPro, Intel Xeon Phi, Intel XScale, InTru, the InTru logo, the InTru Inside logo, InTru soundmark, Itanium, Itanium Inside, MCS, MMX, Moblin, Pentium, Pentium Inside, Puma, skoool, the skoool logo, SMARTi, Sound Mark, Stay With It, The Creators Project, The Journey Inside, Thunderbolt, Ultrabook, vPro Inside, VTune, Xeon, Xeon Inside, X-GOLD, XMM, X-PMU and XPOSYS are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

Copyright © 2002-2015, Intel Corporation. All rights reserved.

Intel® Data Analytics Acceleration Library 2016 Beta Release Notes


This document provides a general summary of new features and important notes about the Intel® Data Analytics Acceleration Library (Intel® DAAL) software product. Please see the following links to the online resources and documents for the latest information regarding Intel DAAL:

Programming guide, API references, help, and code samples are installed with the library and are accessible from the installation location. 

Please register your product using your preferred email address. This helps Intel recognize you as a valued customer in the support forum and ensures that you will be notified of product updates. You can read Intel's Online Privacy Notice Summary if you have any questions regarding the use of your email address for software product registration.

What's New in Intel DAAL 2016 Beta Update 1

  • Introduced new C++ and Java* APIs for enhanced usability.
  • Added Hadoop* samples.
  • Introduced support for the following numeric tables:
    • Packed symmetric matrix (lower and upper layouts).
    • Packed triangular matrix (lower and upper).
  • Introduced shared pointers and data collections for enhanced usability of data management.
  • Extended interfaces for compression classes for enhanced usability.
  • Other improvements and bug fixes.

What's New in Intel DAAL 2016 Beta Initial Release

  • APIs for the C++ and Java programming languages.
  • Data mining and analysis algorithms for
    • Computing correlation distance and Cosine distance
    • PCA (Correlation, SVD)
    • Matrix decomposition (SVD, QR, Cholesky)
    • Computing statistical moments
    • Computing variance-covariance matrices
    • Univariate and multivariate outlier detection
    • Association rule mining
  • Algorithms for supervised and unsupervised machine learning:
    • Linear regressions
    • Naïve Bayes classifier
    • AdaBoost, LogitBoost, and BrownBoost classifiers
    • SVM
    • K-Means clustering
    • Expectation Maximization (EM) for Gaussian Mixture Models (GMM)
  • Support for local and distributed data sources:
    • In-file and in-memory CSV
    • MySQL
    • HDFS
  • Support for Resilient Distributed Dataset (RDD) objects for Apache Spark*. 
  • Data compression and decompression:
    • ZLIB
    • LZO
    • RLE
    • BZIP2
  • Data serialization and deserialization.

Known Limitations:

  • Windows* only: Microsoft Visual Studio* 2010 is not supported in the current version.

Also, see the announcement of the initial Beta release.

Product Contents

Intel DAAL ships as a single package for both IA-32 and Intel® 64 architectures.

Technical Support

If you did not register your Intel software product during installation, please do so now at the Intel® Software Development Products Registration Center. Registration entitles you to free technical support, product updates, and upgrades for the duration of the support term.

For general information about Intel technical support, product updates, user forums, FAQs, tips and tricks and other support questions, please visit http://www.intel.com/software/products/support/.

For technical information about Intel DAAL, including FAQs, tips and tricks, and other support information, please visit the Intel DAAL user forum: https://software.intel.com/en-us/forums/intel-data-analytics-acceleration-library.

License Definitions

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's Web Site.

Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See http://www.intel.com/products/processor_number for details.

BlueMoon, BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino Inside, Cilk, Core Inside, E-GOLD, Flexpipe, i960, Intel, the Intel logo, Intel AppUp, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Insider, the Intel Inside logo, Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel Sponsors of Tomorrow., the Intel Sponsors of Tomorrow. logo, Intel StrataFlash, Intel vPro, Intel Xeon Phi, Intel XScale, InTru, the InTru logo, the InTru Inside logo, InTru soundmark, Itanium, Itanium Inside, MCS, MMX, Moblin, Pentium, Pentium Inside, Puma, skoool, the skoool logo, SMARTi, Sound Mark, Stay With It, The Creators Project, The Journey Inside, Thunderbolt, Ultrabook, vPro Inside, VTune, Xeon, Xeon Inside, X-GOLD, XMM, X-PMU and XPOSYS are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

Copyright © 2002-2015, Intel Corporation. All rights reserved.

Intel® Data Analytics Acceleration Library 2016 Beta Installation Guide


Please see the following links to the online resources and documents for the latest information regarding Intel DAAL:

These instructions assume a standalone installation of Intel® Data Analytics Acceleration Library (Intel® DAAL). If your copy of Intel® DAAL was included as part of one of our "suite products" (e.g., Intel® Parallel Studio XE), your installation procedure may differ from the one described below; in that case, please refer to the readme and installation guides for your "suite product" for specific installation details.

Before installing Intel DAAL, check the Product Downloads section of Intel Registration Center to see if a newer version of the library is available. The version listed in your electronic download license letter may not be the most current version available.

The installation of the product requires a valid license file or serial number. If you are evaluating the product, you can also choose the "Evaluate this product (no serial number required)" option during installation.
If you have a previous version of Intel DAAL installed, you do not need to uninstall it before installing a new version. If you choose to uninstall the older version, you may do so at any time.

Directory Layout

Intel® DAAL installs in the daal subdirectory inside <parent product directory>. For example, when Intel® DAAL is installed as part of the Intel® Parallel Studio XE Composer Edition for C++, <parent product directory> is as follows:

  • On Windows* OS: C:\Program Files (x86)\Intel_sw_development_tools\compilers_and_libraries_2016.x.xxx\windows (on some systems, instead of Program Files (x86), the directory name is Program Files)
  • On OS X*:  /opt/intel/compilers_and_libraries_2016.x.xxx/mac
  • On Linux* OS: /opt/intel/compilers_and_libraries_2016.x.xxx/linux

In the text that follows, <arch> refers to the primary processor architecture, such as ia32 or intel64, and <DAALROOT> refers to the Intel® DAAL installation directory. Additionally, replace the '\' in the paths below with a '/' on Linux* OS or OS X*.

Within the Intel® DAAL root installation directory you will find a collection of subdirectories.

<DAALROOT>\daal\bin - Environment variables.

<DAALROOT>\daal\lib\<arch> - The lib directories contain the actual library files that you must link against when building your Intel® DAAL application. These include the Intel® DAAL static library files and the stub library files needed to build applications that link with the Intel® DAAL dynamic or shared library files. This directory does not contain the dynamic library files you need to redistribute with your application if you choose to link against the shared library files; those files are stored elsewhere (see below).

<DAALROOT>\compiler\lib\<arch> - The common compiler redist directory contains additional dynamic libraries that you may need to distribute with your Intel® DAAL application when linking against the shared libraries. In particular, the Intel® DAAL library utilizes the Intel® OpenMP* library to implement multi-threading, and those OpenMP library files can be found in this directory.

<DAALROOT>\documentation\en\daal -  Intel® DAAL documentation.

<DAALROOT>\daal\examples\cpp - Actual examples in C++ language.

<DAALROOT>\daal\examples\java - Actual examples in Java* language.

<DAALROOT>\daal\examples\data - Data files for C++ and Java* examples.

<DAALROOT>\daal\samples - Samples in C++ and Java* languages that you can include in your program.

<DAALROOT>\daal\include - The interface files defining data types and function prototypes for Intel® DAAL. See the Getting Started page for more information.

<DAALROOT>\redist\<arch>\daal - Intel® DAAL dynamic libraries that you may distribute with your Intel® DAAL application when linking against the shared libraries.

<DAALROOT>\tbb\lib\<arch>\vc_mt - This common directory contains additional dynamic libraries that you may need to distribute with your Intel® DAAL application when linking against the shared libraries. In particular, these are dynamic libraries from Intel® Threading Building Blocks (Intel® TBB) that Intel® DAAL utilizes for its threading implementation.

Installing Intel DAAL on Windows* OS

You can install multiple versions of Intel DAAL and any combination of 32-bit and 64-bit variations of the library on your development system.

These instructions assume you have an Internet connection. The installation program will automatically download a license key to your system. If you do not have an Internet connection, see the manual installation instructions below.

Interactive installation on Windows* OS

  1. If you received the Intel DAAL product as a download, double-click on the downloaded file to begin.
  2. You will be asked to choose a target directory ("C:\Program Files\Intel\Downloads" by default) in which the contents of the self-extracting setup file will be placed before the actual library installation begins. You can choose to remove or keep the temporarily extracted files after installation is complete. You can safely remove the files in this "downloads" directory if you need to free up disk space; however, deleting these files will impact your ability to change your installation options at a later time using the add/remove applet (you will always be able to uninstall).
  3. Click Next when the installation wizard appears.
  4. If you agree with the End User License Agreement, click Next to accept the license agreement.
  5. License Activation Options:
    • If you do have an Internet connection, skip this step and proceed to the next numbered step (below).
    • If you do not have an Internet connection, or require a floating or counted license installation, choose Alternative Activation and click Next; there will be two options to choose from:
      • Activate Offline:  requires a License File.
      • Use a License manager: Floating License activation
  6. Enter your serial number to activate and install the product.
  7. Activation completed. Click Next to continue.
  8. If a package from another update of Parallel Studio XE is installed, you will be able to select the update mode in the Choose Product Update Mode dialog:
    1. I want to apply this update to the existing version.
      Using this option will result in the original version being replaced by the updated version.
    2. I want to install this update separate from the existing version.
      Using this option will result in the update being installed in a different location, leaving the existing version unchanged.
  9. The Installation Summary dialog box opens to show the summary of your installation options (chosen components, destination folder, etc.). Click Install to start installation (proceed to step 15) or click Customize to change settings. If you select Customize, follow steps 10-14.

  10. In the Architecture Selection dialog box, select the architecture of the platform where your software will run.
  11. In the Choose a Destination Folder dialog box, choose the installation directory. By default, it is C:\Program Files\Intel_sw_development_tools. You may choose a different directory. All files are installed into the Intel Parallel Studio XE 2016 subdirectory (if you chose I want to install this update separate from the existing version, all files are installed into a separate subdirectory named for the package number).
  12. The package contains components for integration into Microsoft Visual Studio*. You can select the Microsoft Visual Studio product(s) for integration in the Choose Integration Target dialog box.
  13. If Microsoft Compute Cluster Pack* is present and the installer detects that the installing system is a member of a cluster, a dialog box is shown that provides the option to install the product on all visible nodes of the cluster or on the current node only (by default, installation on all visible nodes is performed).
  14. The Installation Summary dialog box opens to show the summary of your installation options (chosen components, destination folder, etc.). Click Install to start installation.
  15. Click Finish in the final screen to exit the Intel Software Setup Assistant. 

Online Installation on Windows* OS

The default electronic installation package for Intel DAAL for Windows now consists of a smaller installer that dynamically downloads and then installs the selected packages. This requires a working Internet connection and possibly a proxy setting if you are behind an Internet proxy. If a working Internet connection is not available, full packages are provided alongside this online install package at the download location.

Silent Installation on Windows* OS

Silent installation enables you to install Intel DAAL on a single Windows* machine in a batch mode, without input prompts. Use this option if you need to install on multiple similarly configured machines, such as cluster nodes.

To invoke silent installation:

  1. Go to the folder where the Intel DAAL package was extracted during unpacking; by default, it is the C:\Program Files\Intel\Download\w_daal_2016.y.xxx folder.
  2. Run setup.exe, located in this folder: setup.exe [<command>] [<arguments>]

If no command is specified, the installation proceeds in the Setup Wizard mode. If a command is specified, the installation proceeds in the non-interactive (silent) mode.

The table below lists the possible commands and the corresponding arguments.

install

    Required arguments: output=<file>, eula={accept|reject}
    Optional arguments: installdir=<installdir>, license=<license>, sn=<s/n>, log=<log file>

    Installs the product as specified by the arguments.

    Use the output argument to define the file where the output will be redirected. This file contains all installer messages that you may need: general communication, warning, and error messages.

    Explicitly indicate by eula=accept that you accept the End User License Agreement.

    Use the license argument to specify a file or folder with the license to be used to activate the product. If a folder is specified, the installation program searches for *.lic files in the specified folder. You can specify multiple files/folders by supplying this argument several times or by concatenating path strings with the ";" separator.

    Use the sn argument to choose activation of the product through a serial number. This activation method requires an Internet connection.

    Do not use the sn and license arguments together because they specify alternative activation methods. If you omit both arguments, the installer only checks whether the product is already activated.

    Use the log argument to specify the location for a log file. This file is used only for debugging. Support engineers may request this file if your installation fails.

remove

    Arguments: output=<file>, log=<log file>

    Removes the product. See the description of the install command for details of the output and log arguments.

repair

    Arguments: output=<file>, log=<log file>

    Repairs the existing product installation. See the description of the install command for details of the output and log arguments.

For example, the command line
 setup.exe install -output=C:\log.txt -eula=accept
launches silent installation that prints output messages to the C:\log.txt file.

License File Installation for Windows* OS

If you have an evaluation license and decide to upgrade to a commercial license, you must complete the following steps after obtaining the commercial serial number:

  1. In the license file directory, replace your evaluation license file (.lic file) with the commercial license file you received (the default license directory is "C:\Program Files\Common Files\Intel\Licenses").
  2. Register the new serial number at https://registrationcenter.intel.com.
  3. Re-installation of Intel DAAL is not required.

Uninstalling Intel DAAL for Windows* OS

To uninstall Intel DAAL, select Add or Remove Programs from the Control Panel and locate the version of Intel DAAL you wish to uninstall.

Note: Uninstalling Intel DAAL does not delete the corresponding license file.

Installing Intel DAAL on Linux* OS

You can install multiple versions of Intel DAAL and any combination of 32-bit and 64-bit variations of the library on your development system.

These instructions assume you have an Internet connection. The installation program will automatically download a license key to your system. If you do not have an Internet connection, see the manual installation instructions below.

Interactive installation on Linux* OS

  1. If you received the product as a downloadable archive, first unpack the Intel DAAL package
    tar -zxvf name_of_downloaded_file
  2. Change the directory (cd) to the folder containing unpacked files.
  3. Run the installation script and follow the instructions in the dialog screens that are presented:
    > ./install.sh
  4. The install script checks your system and displays any optional and critical prerequisites necessary for a successful install. You should resolve all critical issues before continuing the installation. Optional issues can be skipped, but it is strongly recommended that you fix all issues before continuing with the installation.

GUI installation on Linux* OS

If you are on a Linux* system with GUI support, the installation can be performed in GUI mode: run the shell script install_GUI.sh. If a GUI is not supported (for example, if running from an ssh terminal), a command-line installation will be provided instead.

Silent Installation on Linux* OS

To run the silent install, follow these steps:

  1.  If you received the product as a downloadable archive, first unpack the Intel DAAL package
    >tar -zxvf name_of_downloaded_file
  2. Change the directory (cd) to the folder containing unpacked files.
  3. Edit the configuration file silent.cfg following the instructions in it:
    1.  Accept End User License Agreement by specifying ACCEPT_EULA=accept instead of default "decline" value;
    2. Specify activation option for the installation.
      • The default option is to use an existing license (ACTIVATION_TYPE=exist_lic); please make sure that a working product license file is in place before beginning. The file should be world-readable and located in a standard Intel license file directory, such as /opt/intel/licenses or ~/licenses.
      • To use another activation method, change the value of the ACTIVATION_TYPE variable. You may also need to change the values of the ACTIVATION_SERIAL_NUMBER and ACTIVATION_LICENSE_FILE variables for specific activation options.
  4. Run the silent install:
    >./install.sh --silent ./silent.cfg

Tip: You can run the install interactively and record all the options into a custom configuration file using the following command.
>./install.sh  --duplicate "./my_silent_config.cfg"
After this you can install the package on other machines with the same installation options using
>./install.sh --silent "./my_silent_config.cfg"
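Pulling the steps above together, a minimal silent.cfg might look like the following. This is a sketch only: the variable names are the ones mentioned above, the serial number is a placeholder, and the configuration file shipped with your package remains the authoritative list of options.

```
# Accept the End User License Agreement (the default value is "decline")
ACCEPT_EULA=accept
# Activation option; alternatives mentioned above include exist_lic and license_file
ACTIVATION_TYPE=serial_number
# Hypothetical placeholder -- replace with your own serial number
ACTIVATION_SERIAL_NUMBER=XXXX-XXXXXXXX
```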

License File Installation for Linux* OS

If you have an evaluation license and decide to upgrade to a commercial license, you must complete the following steps after obtaining the commercial serial number:

  1. In the license file directory, replace your evaluation license file (.lic file) with the commercial license file you received (the default license directory is /opt/intel/licenses).
  2. Register the new serial number at https://registrationcenter.intel.com.
  3. Re-installation of Intel DAAL is not required.

Online Installation on Linux* OS

The default electronic installation package for Intel DAAL for Linux consists of a smaller installation package that dynamically downloads and then installs packages selected to be installed. This requires a working internet connection and potentially a proxy setting if you are behind an internet proxy. Full packages are provided alongside where you download this online install package if a working internet connection is not available.

Offline Installation on Linux* OS

If the system where Intel DAAL will be installed is disconnected from the Internet, the product may be installed in offline mode. To install the product offline, you must provide the installer with the full path to a license file.

The license file (.lic file) is included as an attachment to the email sent after you purchase and register the product at the Intel Registration Center (IRC). You may request that the .lic file be resent from the IRC. To do this, go to the "My Intel Products" page and select the needed update for Intel DAAL from the "Download Latest Update" column. When the page with information about the selected product update opens, click the "Manage" link in the "Licenses" column. When the "Manage License" page opens, press the "Resend license file to my email" button.

  1. If the product is installed in GUI mode: in the "Activation options" dialog, select the "Choose alternative activation" radio button and press "Next". In the following dialog, select the "Activate offline" radio button and press "Next". In the next dialog, type the full path to the license file and press "Next".
  2. If the product is installed in interactive mode: at step 3, "Activation step", select option 4, "I want to activate by using a license file, or by using Intel(R) Software". At the next step, choose option 1, "Activate offline [default]", and type the full path to the license file.
  3. If the product is installed in silent mode: in the file silent.cfg, set the variable ACTIVATION_TYPE to the value license_file, and set the variable ACTIVATION_LICENSE_FILE to the full path of the license file.

Uninstalling Intel DAAL for Linux* OS

If you installed as root, you will need to log in as root.

To uninstall Intel DAAL run the uninstall script: <DAAL-install-dir>/uninstall.sh.

Alternatively, you may use GUI mode to uninstall Intel DAAL for Linux* OS. First, run the shell script install_GUI.sh, then select the Remove option from the menu and press the "Next" button.

If you installed in the default directory, <DAAL-install-dir> is:
> /opt/intel/compilers_and_libraries_2016.x.xxx/linux/daal

Uninstalling Intel DAAL will not delete your license file(s).

Intel® Data Analytics Acceleration Library 2016 Beta System Requirements


Please see the following links to the online resources and documents for the latest information regarding Intel DAAL:

System Requirements

Intel® DAAL supports the IA-32 and Intel® 64 architectures. For a complete explanation of these architecture names, please read the following article:
Intel Architecture Platform Terminology for Development Tools.

The lists below pertain only to the system requirements necessary to support application development with Intel® DAAL. Please review the hardware and software system requirements for your compiler (gcc*, Microsoft Visual Studio*, or Intel® Compiler Pro), in the documentation provided with that product, to determine the minimum development system requirements necessary to support your compiler product.

Supported Operating Systems

  • Windows 8* (IA-32/Intel® 64)
  • Windows 8.1* (IA-32/Intel® 64)
  • Windows 7* (IA-32/Intel® 64) - Note: SP1 is required for use of Intel® AVX instructions
  • Windows Server* 2008 R2 SP1 and SP2 (IA-32/Intel® 64)
  • Windows HPC Server 2008 R2 (IA-32/Intel® 64)
  • Windows Server* 2012 (IA-32/Intel® 64)
  • Red Hat* Enterprise Linux* 6 (IA-32 / Intel® 64)
  • Red Hat* Enterprise Linux* 7 (IA-32 / Intel® 64)
  • Red Hat Fedora* core 20 (IA-32 / Intel® 64)
  • Red Hat Fedora* core 21 (IA-32 / Intel® 64)
  • SUSE Linux Enterprise Server* 11
  • SUSE Linux Enterprise Server* 12
  • Debian* GNU/Linux 6 (IA-32 / Intel® 64)
  • Debian* GNU/Linux 7 (IA-32 / Intel® 64)
  • Ubuntu* 12.04 (Intel® 64)
  • Ubuntu* 13.04 (IA-32 / Intel® 64)
  • Ubuntu* 14.04 LTS (IA-32 / Intel® 64)
  • Ubuntu* 15.04 (IA-32 / Intel® 64)
  • OS X* 10.10 (Xcode 6.0)

Note: Intel® DAAL is expected to work on many more Linux distributions as well. Let us know if you have trouble with the distribution you use.

Supported C/C++* compilers for Windows* OS:

  • Intel® C++ Compiler 15.0 for Windows* OS
  • Intel® C++ Compiler 16.0 for Windows* OS
  • Microsoft Visual Studio* 2012 - help file and environment integration
  • Microsoft Visual Studio* 2013 - help file and environment integration

Supported C/C++* compilers for Linux* OS:

  • Intel® C++ Compiler 15.0 for Linux* OS
  • Intel® C++ Compiler 16.0 for Linux* OS
  • GNU Compiler Collection 4.9 and later

Supported Java* compilers:

  • Java* SE 7 from Sun Microsystems, Inc.
  • Java* SE 8 from Sun Microsystems, Inc.

MPI implementations that Intel® DAAL for Windows* OS has been validated against:

MPI implementations that Intel® DAAL for Linux* OS has been validated against:

SQL

  • MySQL 5.0-5.7
  • MySQL 4.0-4.1

Hadoop* implementations that Intel® DAAL has been validated against:

  • Hadoop* 2.6.0

Note: Intel® DAAL is expected to work on many more Hadoop* distributions as well. Let us know if you have trouble with the distribution you use.

Spark* implementations that Intel® DAAL has been validated against:

  • Spark* 1.1.1

Note: Intel® DAAL is expected to work on many more Spark* distributions as well. Let us know if you have trouble with the distribution you use.

Deprecation Notices:

  • Red Hat* Enterprise Linux* 5 is deprecated
    • Support for Red Hat* Enterprise Linux* 5 has been deprecated and will be removed in a future release.
  • Visual Studio* 2010 is deprecated
    • Support for Visual Studio* 2010 has been deprecated and will be removed in a future release.

Intel® Math Kernel Library Inspector-executor Sparse BLAS Routines


Intel® Math Kernel Library (Intel® MKL) 11.3 Beta, released in April 2015, offers the inspector-executor API for Sparse BLAS (SpMV 2). This API divides operations into two steps. During an initial analysis stage, the API inspects the matrix sparsity pattern and applies matrix structure changes. In subsequent routine calls, this information is reused in order to improve performance. 

This inspector-executor API supports key Sparse BLAS operations for iterative sparse solvers and covers all the functionality available in the classic Sparse BLAS implementation available in Intel MKL:

  • Sparse matrix-vector multiplication
  • Sparse matrix-matrix multiplication with sparse or dense result
  • Triangular system solution
  • Sparse matrix addition

The PDF file attached below is the reference manual (initial version) of the inspector-executor API. Future Intel MKL releases will include this reference manual in the regular Intel MKL product documentation. 

Significant Scalability and Performance Improvement for Intel® MKL PARDISO on SMP Systems


Intel® MKL 11.3 Beta (released in April 2015) contains significant performance and scalability improvements for the direct sparse solver (a.k.a. Intel MKL PARDISO) on SMP systems. These improvements particularly benefit Intel Xeon Phi coprocessors and Intel Xeon processors with large core counts. As an example, the chart below shows a 1.7x to 2.5x speedup of Intel MKL 11.3 Beta over Intel MKL 11.2 when using PARDISO to solve various sparse matrices on an Intel Xeon Phi coprocessor with 61 cores. In this example, PARDISO runs natively on the coprocessor, meaning the host CPU is not involved in the computation. A total of 240 threads (60 cores, 4 threads each) is used during the execution.

 

The improvements are applied to all three phases of using PARDISO to solve a linear system, namely, reordering, factorization, and solving. Users do not need to turn any special knob (e.g. special environment settings or service function calls) to get the improved performance. It is available out of the box, as long as you use Intel MKL 11.3 Beta or later. 

There is a tip to get the best possible scalability, though. In addition to the usual guidelines for getting good performance from MKL functions, as documented here and here, we recommend that PARDISO users set KMP_AFFINITY to "scatter" when the number of threads is less than or equal to 60, and set KMP_AFFINITY to "compact" when the number of threads is more than 60.

Limitations: We are working hard to bring this improvement into all functionality areas of PARDISO. But at present (as of April 2015), you may not see the same extent of performance improvement if you use one of these PARDISO features: Schur complement; diagonal pivot control; user defined permutation; reordering with nested levels; and the Conditional Numeric Reproducibility (CNR) feature. 

 

 

 


Introducing Batch GEMM Operations


The general matrix-matrix multiplication (GEMM) is a fundamental operation in most scientific, engineering, and data applications. There is an everlasting desire to make this operation run faster. Optimized numerical libraries like Intel® Math Kernel Library (Intel® MKL) typically offer parallel high-performing GEMM implementations to leverage the concurrent threads supported by modern multi-core architectures. This strategy works well when multiplying large matrices because all cores are used efficiently. When multiplying small matrices, however, individual GEMM calls may not optimally use all the cores. Developers wanting to improve utilization usually batch multiple independent small GEMM operations into a group and then spawn multiple threads for different GEMM instances within the group. While this is a classic example of an embarrassingly parallel approach, making it run optimally requires a significant programming effort that involves threads creation/termination, synchronization, and load balancing. That is, until now. 

Intel MKL 11.3 Beta (part of Intel® Parallel Studio XE 2016 Beta) includes a new flavor of GEMM feature called "Batch GEMM". This allows users to achieve the same objective described above with minimal programming effort. Users can specify multiple independent GEMM operations, which can be of different matrix sizes and different parameters, through a single call to the "Batch GEMM" API. At runtime, Intel MKL will intelligently execute all of the matrix multiplications so as to optimize overall performance. Here is an example that shows how "Batch GEMM" works:

Example

Let A0, A1 be two real double precision 4x4 matrices; Let B0, B1 be two real double precision 8x4 matrices. We'd like to perform these operations:

C0 = 1.0 * A0 * B0^T, and C1 = 1.0 * A1 * B1^T

where C0 and C1 are two real double precision 4x8 result matrices. 

Again, let X0, X1 be two real double precision 3x6 matrices; Let Y0, Y1 be another two real double precision 3x6 matrices. We'd like to perform these operations:

Z0 = 1.0 * X0 * Y0^T + 2.0 * Z0, and Z1 = 1.0 * X1 * Y1^T + 2.0 * Z1

where Z0 and Z1 are two real double precision 3x3 result matrices.

We could have accomplished these multiplications using four individual calls to the standard DGEMM API. Instead, we use a single "Batch GEMM" call to do the same, with potentially improved overall performance. We illustrate this using the "cblas_dgemm_batch" function in the example below.

#define    GRP_COUNT    2

MKL_INT    m[GRP_COUNT] = {4, 3};
MKL_INT    k[GRP_COUNT] = {4, 6};
MKL_INT    n[GRP_COUNT] = {8, 3};

MKL_INT    lda[GRP_COUNT] = {4, 6};
MKL_INT    ldb[GRP_COUNT] = {4, 6};
MKL_INT    ldc[GRP_COUNT] = {8, 3};

CBLAS_TRANSPOSE    transA[GRP_COUNT] = {CblasNoTrans, CblasNoTrans};
CBLAS_TRANSPOSE    transB[GRP_COUNT] = {CblasTrans, CblasTrans};

double    alpha[GRP_COUNT] = {1.0, 1.0};
double    beta[GRP_COUNT] = {0.0, 2.0};

MKL_INT    size_per_grp[GRP_COUNT] = {2, 2};

// Total number of multiplications: 4
double    *a_array[4], *b_array[4], *c_array[4];
a_array[0] = A0, b_array[0] = B0, c_array[0] = C0;
a_array[1] = A1, b_array[1] = B1, c_array[1] = C1;
a_array[2] = X0, b_array[2] = Y0, c_array[2] = Z0;
a_array[3] = X1, b_array[3] = Y1, c_array[3] = Z1;

// Call cblas_dgemm_batch
cblas_dgemm_batch (
        CblasRowMajor,
        transA,
        transB,
        m,
        n,
        k,
        alpha,
        a_array,
        lda,
        b_array,
        ldb,
        beta,
        c_array,
        ldc,
        GRP_COUNT,
        size_per_grp);



The "Batch GEMM" interface resembles the GEMM interface. It is simply a matter of passing arguments as arrays of pointers to matrices and parameters, instead of as matrices and the parameters themselves. We see that it is possible to batch the multiplications of different shapes and parameters by packaging them into groups. Each group consists of multiplications of the same matrices shape (same m, n, and k) and the same parameters. 

Performance

While this example does not show performance advantages of "Batch GEMM", when you have thousands of independent small matrix multiplications then the advantages of "Batch GEMM" become apparent. The chart below shows the performance of 11K small matrix multiplications with various sizes using "Batch GEMM" and the standard GEMM, respectively. The benchmark was run on a 28-core Intel Xeon processor (Haswell). The performance metric is Gflops, and higher bars mean higher performance or a faster solution.

The second chart shows the same benchmark running on a 61-core Intel Xeon Phi coprocessor (KNC). Because "Batch GEMM" is able to exploit parallelism using many concurrent threads, its advantages are more evident on architectures with a larger core count.

Summary

This article introduces the new API for batch computation of matrix-matrix multiplications. It is an ideal solution when many small independent matrix multiplications need to be performed. "Batch GEMM" supports all precision types (S/D/C/Z). It has Fortran 77 and Fortran 95 APIs, and also CBLAS bindings. It is available in Intel MKL 11.3 Beta and later releases. Refer to the reference manual for additional documentation.  

 

Intel DAAL Code Samples


Some users have asked for code samples showing how to use Intel DAAL. Many examples of using the DAAL API, as well as examples of using DAAL in Hadoop* and Spark* environments, can be found in the installation location (after you install DAAL). To make them more accessible, we've extracted a few code samples and published them here. Users can download them from the attachments and browse the code without having to install DAAL first. This is a great way to take a peek at the DAAL API and understand how it works.

Included here are three code samples:

  • Principal Component Analysis (PCA) - This C++ code illustrates the basic usage of the DAAL API.
  • Apache Spark* example - This Java code shows how to interact with Spark* RDD (Resilient Distributed Datasets).
  • Apache Hadoop* example - This Java code shows how to use DAAL functions with Hadoop MapReduce and how to interact with HDFS.

Please let us know your feedback in the comments. 

Intel® Data Analytics Acceleration Library 2016 Beta Update 2 is now available!


Intel® Data Analytics Acceleration Library (Intel® DAAL) is a C++ and Java API library of optimized analytics building blocks for all data analysis stages, from data acquisition to data mining and machine learning. It is an essential library for engineering high-performance data application solutions. Intel DAAL 2016 Beta Update 2 packages are now available for download. To join the free Beta program and get instructions on downloading the software, follow the links below:

What's New in Intel DAAL 2016 Beta Update 2

  • Introduced support for online and distributed processing modes for Naïve Bayes classifiers.
  • Introduced support for online and distributed processing modes for variance-covariance matrix computation.  
  • Added default initialization procedure in EM for GMM method.
  • Extended Java API to allow users to allocate memory for storing partial or final results of algorithms.
  • Added a service method for querying the library's version and the underlying hardware information.
  • Extended error handling capabilities.
  • Other improvements and bug fixes.

See details in the release notes.

Java Eclipse Runtime Error


Hi

I downloaded the daal update 2 and installed it on a windows system. I have been able to run the java examples using the .bat files. But, when I set up an Eclipse program, I am running into problems. The daal.jar is recognized (build goes through) but when I run an example, I get the following error:

Here I am running the simple LibraryVersionInfoExample

Exception in thread "main" java.lang.UnsatisfiedLinkError: no JavaAPI in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1865)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1122)
    at com.intel.daal.service.LibraryVersionInfo.<clinit>(LibraryVersionInfo.java:47)
    at com.intel.daal.examples.services.LibraryVersionInfoExample.main(LibraryVersionInfoExample.java:37)

Is this a recognized problem, or is it just my setup?

Regards

Hemanth

zlib


I'm confused, is there supposed to be zlib in DAAL or not? I was expecting to find the Intel fork of zlib there.
