[molpro-user] Solid State Drives

Joseph Lane jlane at waikato.ac.nz
Thu May 5 05:46:41 BST 2011


Dear Jacek

Thank you for sharing those results. Do you (or does anyone else on
the list) know what the major I/O bottleneck is for medium to large
CCSD(T) calculations with Molpro? Is it sequential read, sequential
write, or seek times? We are in the process of ordering a new server
and are trying to decide whether to purchase a conventional SAS HDD
RAID 0 array or some form of SSD setup.
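
For anyone wanting to probe a candidate disk directly, the three
access patterns that Molpro's IO_TEST reports (sequential write,
sequential read, and seeks) can be approximated with a short script
along the lines below. This is only an illustrative sketch of those
access patterns, not Molpro's actual benchmark; the file name, file
size, and transfer size are placeholders:

    #!/usr/bin/env python3
    # Rough sketch of the access patterns Molpro's IO_TEST exercises:
    # sequential write, sequential read, and random seek+read with a
    # fixed transfer size. Illustrative only; not Molpro's benchmark.
    import os
    import random
    import time

    SCRATCH = "iotest.tmp"      # create this on the device under test
    SEGMENT = 512 * 1024        # 0.5 MB transfer size, as in IO_TEST
    N_SEGMENTS = 409            # ~200 MB file, as in the 200 MB test
    N_SEEKS = 10000

    buf = os.urandom(SEGMENT)

    # Sequential write, synced so we time the device, not just the cache
    t0 = time.perf_counter()
    with open(SCRATCH, "wb") as f:
        for _ in range(N_SEGMENTS):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    sw = time.perf_counter() - t0

    # Sequential read (may be served from the page cache; see note below)
    t0 = time.perf_counter()
    with open(SCRATCH, "rb") as f:
        while f.read(SEGMENT):
            pass
    sr = time.perf_counter() - t0

    # Random seeks, each followed by one segment-sized read
    size = os.path.getsize(SCRATCH)
    t0 = time.perf_counter()
    with open(SCRATCH, "rb") as f:
        for _ in range(N_SEEKS):
            f.seek(random.randrange(size - SEGMENT))
            f.read(SEGMENT)
    sl = time.perf_counter() - t0

    mb = SEGMENT * N_SEGMENTS / 1e6
    print(f"Sequential write: {sw:7.2f} s  {mb / sw:8.1f} MB/s")
    print(f"Sequential read:  {sr:7.2f} s  {mb / sr:8.1f} MB/s")
    print(f"Seek + read:      {sl:7.2f} s  {N_SEEKS / sl:8.1f} seeks/s")
    os.remove(SCRATCH)

One caveat: the sequential read here comes straight after the write,
so it is largely served from the operating system's page cache rather
than the device itself, which presumably also explains the very high
sequential read rates in the results quoted below.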


Kind regards


Jo




On Thu, May 5, 2011 at 2:05 AM, Jacek Antoni Klos <jklos at umd.edu> wrote:
> Dear Jo
>
> Yes, I ran some IO_TEST benchmarks and the small_normal_ccsd benchmark using the 512GB SSD that comes with Apple Mac Pro machines (Apple uses Toshiba-brand SSDs).
>
> Here are some excerpts from the benchmark results with the SSD as scratch:
>
> IO_TEST: 200 MB file
>
>  Test I/O on file length   204.5Mb with transfer size  0.50Mb (   409 segments) and  10000 seeks
>
>  Test name   Description                                  Time/s      Rate/(Mb)/s
>  SW          Sequential write, 1 process file 1             1.37         149.27
>  SR          Sequential read, 1 process file 1              0.12        1704.16
>  SL          Seek, 1 process                                0.08      125000.12
>
>
>  **********************************************************************************************************************************
>  PROGRAMS   *        TOTAL    IOTEST    IOTEST
>  CPU TIMES  *         0.10      0.00      0.00
>  REAL TIME  *         3.77 SEC
>  DISK USED  *       214.76 MB
>  **********************************************************************************************************************************
>
>
>
> IO_TEST: 10 GB file
>
>  Test I/O on file length 10239.5Mb with transfer size  0.50Mb ( 20479 segments) and  10000 seeks
>
>  Test name   Description                                  Time/s      Rate/(Mb)/s
>  SW          Sequential write, 1 process file 1            58.61         174.71
>  SR          Sequential read, 1 process file 1              2.59        3953.48
>  SL          Seek, 1 process                                0.08      124999.75
>
>
>  **********************************************************************************************************************************
>  PROGRAMS   *        TOTAL    IOTEST    IOTEST
>  CPU TIMES  *         0.36      0.08      0.19
>  REAL TIME  *       124.01 SEC
>  DISK USED  *        10.74 GB
>  **********************************************************************************************************************************
>
>
>
>
> IO_TEST: two simultaneous jobs using the same SSD as scratch, each with its own 10 GB file
>
>  Test I/O on file length 10239.5Mb with transfer size  0.50Mb ( 20479 segments) and  10000 seeks
>
>  Test name   Description                                  Time/s      Rate/(Mb)/s
>  SW          Sequential write, 1 process file 1           118.73          86.24
>  SR          Sequential read, 1 process file 1              2.68        3820.71
>  SL          Seek, 1 process                                0.08      125000.12
>
>
>  **********************************************************************************************************************************
>  PROGRAMS   *        TOTAL    IOTEST    IOTEST
>  CPU TIMES  *         0.48      0.07      0.32
>  REAL TIME  *       247.99 SEC
>  DISK USED  *        10.74 GB
>  **********************************************************************************************************************************
>
>
>
> SMALL_NORMAL_CCSD benchmark on the SSD:
>
>  CPU and I/O time analysis:
>  Routine           CPU(%)      MFLOP     SYS     WALL    Call   Routine           CPU(%)      MFLOP     SYS     WALL    Call
>  TOTAL:          514.5(99.9)    0.0      6.9    525.6      1    TRIPLES:        355.9(69.2) 6872.5      3.4    359.3      1
>  KEXTA:           96.1(18.7)    0.0      0.7     99.4      8    CCVIJ3:          20.8( 4.0)10322.2      0.2     21.0      8
>  YZMAT:           18.4( 3.6)10759.8      0.0     18.3   1152    CCKINT:          10.3( 2.0) 2145.2      0.5     11.8      1
>  CCVIJ1:           6.0( 1.2) 1588.4      0.1      6.1      8    CC3EXT:           3.8( 0.7)    0.0      0.4      4.3      7
>  CCKEXT:           2.1( 0.4) 4956.1      0.1      2.8      8    CCMP2:            0.4( 0.1) 1116.8      0.1      0.5      9
>  CCDIIS:           0.4( 0.1)    0.0      0.5      1.4     17    CIOPTR:           0.3( 0.1) 9649.6      0.0      0.3      1
>  CCVIJ2:           0.2( 0.0) 3186.9      0.0      0.2      8    CIORTH:           0.0( 0.0)    0.0      0.1      0.0      1
>  TRANSFORM:        0.0( 0.0)    0.0      0.0      0.0      1
>
>
>  PROGRAM          ENERGY       USER    SYS   TOTCPU  ELAPSED  USER(%)  TOTCPU(%)
>  AOINT          0.00000000     6.11   0.30     6.41     6.42    95.17      99.84
>  AOSORT         0.00000000     3.45   1.28     4.73    10.75    32.09      44.00
>  INT(TOT)       0.00000000     9.56   1.58    11.14    17.17    55.68      64.88
>  HF-SCF      -156.15930716     9.41   0.85    10.26    10.27    91.63      99.90
>  TRANSFORM      0.00000000    10.54   0.55    11.09    12.14    86.82      91.35
>  CCSD        -156.83775164   148.06   2.18   150.24   154.14    96.06      97.47
>  TRIPLES       -0.02781518   355.94   3.37   359.31   359.32    99.06     100.00
>  CCSD(T)     -156.86556682   514.57   6.11   520.68   525.65    97.89      99.05
>  TOTAL          0.00000000   533.54   8.54   542.08   553.09    96.47      98.01
>
>
>
>
>
> Now we switch the temporary file directory to a 3TB RAID0 array composed of three 1TB 7200RPM disks.
>
> Here are the IO_TEST benchmarks using that RAID0 as the scratch directory:
>
> IO_TEST: 200 MB file
>
>  Test I/O on file length   204.5Mb with transfer size  0.50Mb (   409 segments) and  10000 seeks
>
>  Test name   Description                                  Time/s      Rate/(Mb)/s
>  SW          Sequential write, 1 process file 1             0.76         269.08
>  SR          Sequential read, 1 process file 1              0.19        1076.32
>  SL          Seek, 1 process                                0.15       66666.73
>
> IO_TEST: 10 GB file
>
>  Test I/O on file length 10239.5Mb with transfer size  0.50Mb ( 20479 segments) and  10000 seeks
>
>  Test name   Description                                  Time/s      Rate/(Mb)/s
>  SW          Sequential write, 1 process file 1            33.09         309.44
>  SR          Sequential read, 1 process file 1              2.84        3605.46
>  SL          Seek, 1 process                                0.15       66666.73
>
>
> As a single drive, the SSD was faster than a single regular drive. I think each disk in our RAID has around 90-100 MB/s transfer, so three of them in RAID0 give around 300 MB/s. Also, Apple does not use the best SSD drives on the market; Intel or OCZ models would perform better.
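>
> (As a rough sanity check of that estimate: RAID0 striping aggregates the sequential bandwidth of its member disks, so the expected rate is roughly the per-disk rate times the number of disks. The figures here are illustrative:)
>
>     # Rough RAID0 sequential-bandwidth estimate (illustrative figures)
>     n_disks = 3
>     per_disk_mb_s = 100              # ~90-100 MB/s per 1TB 7200RPM disk
>     print(n_disks * per_disk_mb_s)   # ~300 MB/s; IO_TEST measured 309.44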
>
> Hope that helps
> Jacek Klos
> Chemistry at University of Maryland
>
>
>
>
>
>
>
> On May 3, 2011, at 9:46 PM, Joseph Lane wrote:
>
>> Has anyone had experience running Molpro using solid state drives for
>> the temporary files instead of conventional hard drives? I am
>> particularly interested in any benchmark jobs run at the coupled
>> cluster level of theory.
>>
>>
>> Kind regards
>>
>>
>> Jo
>>
>> --
>> --------------------------------------------------------
>> Dr Joseph Lane
>> Lecturer in Physical and Theoretical Chemistry
>> Department of Chemistry
>> University of Waikato
>> Private Bag 3105
>> Hamilton 3240
>> New Zealand
>>
>> Ph: +64-7-838-4466 ext 8549
>> Fax: +64-7-838-4219



-- 
--------------------------------------------------------
Dr Joseph Lane
Lecturer in Physical and Theoretical Chemistry
Department of Chemistry
University of Waikato
Private Bag 3105
Hamilton 3240
New Zealand

Ph: +64-7-838-4466 ext 8549
Fax: +64-7-838-4219


