Intel 14th Gen Core RaptorLake-Refresh (i7 14700K(F)) Review & Benchmarks – Hybrid Top-End Performance

What is “RaptorLake-Refresh”?

It is the “next-generation” (14th Gen) Core architecture, refreshing the current “RaptorLake” (13th Gen) and thus the 4th-generation “hybrid” (aka “big.LITTLE”) architecture that Intel has released. As before, it combines big/P(erformant) “Core” cores with LITTLE/E(fficient) “Atom” cores in a single package and covers everything from desktops and laptops to tablets and even (low-end) servers.

  • Desktop (S) (65-125W rated, up to 253W turbo)
    • 8C (aka big/P) + 16c (aka LITTLE/E) / 32T total (14th Gen Core i9-14900K(F))
    • 50% more LITTLE/E cores than RPL (12c vs. 8c – 14th Gen Core i7-14700K(F))

For best performance and efficiency, this does require operating system scheduler changes – so that threads are assigned to the appropriate physical core/thread. For compute-heavy/low-latency work this means a “big/P” core; for low-compute/power-limited work this means a “LITTLE/E” core.

In the Windows world, this means “Windows 11” for clients and “Windows Server vNext” (note not the recently released Server 2022 based on 21H2 Windows 10 kernel) for servers. The Windows power plans (e.g. “Balanced“, “High Performance“, etc.) contain additional settings (hidden), e.g. prefer (or require) scheduling on big/P or LITTLE/E cores and so on. But in general, the scheduler is supposed to automatically handle it all based on telemetry from the CPU.

Windows 11 also gets an updated QoS (Quality of Service) API (aka functions) allowing app(lications) like Sandra to indicate which threads should use big/P cores and which LITTLE/E cores. Naturally, this means updated applications will be needed for best power efficiency (a minimal usage sketch follows below).
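
For illustration, here is a minimal sketch (our example, not Sandra’s actual code) of how an application can hint thread QoS on Windows 11 using the documented SetThreadInformation / ThreadPowerThrottling API – requesting “Eco” QoS generally steers a thread towards LITTLE/E cores, while explicitly requesting high QoS keeps it on big/P cores:

```cpp
// Minimal sketch: hinting per-thread QoS on Windows 11 via the documented
// ThreadPowerThrottling information class (not Sandra's actual code).
#include <windows.h>

// eco = true  -> EcoQoS: the scheduler prefers LITTLE/E (Atom) cores
// eco = false -> explicitly request high QoS: prefer big/P (Core) cores
bool PreferEfficiencyCores(bool eco)
{
    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = eco ? THREAD_POWER_THROTTLING_EXECUTION_SPEED : 0;

    return SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                                &state, sizeof(state)) != FALSE;
}
```

Calling this on background/housekeeping threads is typically enough – the scheduler still makes the final placement decision based on load and telemetry.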

Intel Core i7 14700K(F) (RaptorLake-Refresh) 8C + 12c

General SoC Details

  • 10nm+++ (Intel 7+) improved process
  • Unified 33MB L3 cache (vs. 30MB on RPL thus 10% larger e.g. 14700K(F))
  • PCIe 5.0 (up to 64GB/s with x16 lanes) – up to x16 lanes PCIe5 + x4 lanes PCIe4
    • NVMe SSDs may thus be limited to PCIe4 – or must bifurcate the main x16 lanes (shared with the GPU) into PCIe5 x8 + x8
  • PCH up to x12 lanes PCIe4 + x16 lanes PCIe3
    • CPU to PCH DMI 4 x8 link (aka PCIe4 x8)
  • DDR5/LP-DDR5 memory controller support (e.g. 4x 32-bit channels) – up to 5600MT/s (official, vs. 4800MT/s on ADL)
    • XMP 3.0 (eXtreme Memory Profile(s)) specification for overclocking, with 3 vendor profiles and 2 user-writable profiles (!)
  • Thunderbolt 4 (and thus USB 4)

big/P(erformance) “Core” core

  • Up to 8C/16T “Raptor Cove” (!) cores – improved from “Golden Cove” in ADL 😉
  • AVX512 disabled (!) in order to match the Atom cores (on consumer parts)
    • (Server versions ADL-EX support AVX512 and new extensions like AMX and FP16 data-format)
    • Single FMA-512 unit (though disabled)
  • SMT support still included, 2x threads/core – thus 16 total
  • L1I remains at 32kB
  • L1D remains at 48kB
  • L2 increased to 2MB per core (almost 2x ADL) like server parts (ADL-EX)

LITTLE/E(fficient) “Atom” core

  • Up to 12c/12T “Gracemont” cores – thus 50% more than RPL (12 vs. 8) and 3x ADL (12700K’s 4) – but the same core
  • No SMT support, only 1 thread/core – thus 12 total (in 3x modules of 4x cores)
  • AVX/AVX2 support – first for Atom core, but no AVX512!
    • (Recall that “Phi” GP-GPU accelerator w/AVX512 was based on Atom core)
  • L1I still at 64kB
  • L1D still at 32kB
  • L2 4MB shared by 4 cores (2x larger than ADL)

As with ADL/RPL, RPL-R’s big “Raptor Cove” cores have AVX512 disabled, which may prove to be a (big) problem considering AMD’s current Zen4 (Ryzen 7000) range supports it.
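
Since AVX512 availability now differs even within the same price bracket (Zen4: yes, RPL-R: no), SIMD software typically selects code paths at run-time. A hedged sketch of the usual CPUID + XGETBV check for AVX512F (our example dispatcher check, using MSVC intrinsics):

```cpp
// Sketch: run-time AVX512F check so code can fall back to AVX2/FMA3 on RPL-R.
#include <intrin.h>

bool HasAvx512F()
{
    int r[4] = {};
    __cpuid(r, 0);
    if (r[0] < 7) return false;                      // leaf 7 not available

    __cpuid(r, 1);
    if ((r[2] & (1 << 27)) == 0) return false;       // OSXSAVE not set

    // OS must enable SSE/AVX/opmask/ZMM state (XCR0 bits 1,2,5,6,7)
    if ((_xgetbv(0) & 0xE6) != 0xE6) return false;

    __cpuidex(r, 7, 0);
    return (r[1] & (1 << 16)) != 0;                  // EBX bit 16 = AVX512F
}
```

A dispatcher can then pick, say, a 512-bit or a 256-bit kernel based on this check – the kind of split that produces the AVX2 vs. AVX512 result notes seen in the tables below.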

The changes from ADL to RPL were already minor (gen 12 to gen 13) but this time RPL-R is not even a new/updated RPL stepping – it is the very same silicon, better binned for higher speeds. This must be the smallest difference between generations ever – even Gen 6 to 7 (“Skylake” SKL to “Kaby Lake” KBL) or Gen 9 to 10 (“Coffee Lake” CFL to “Comet Lake” CML) had more changes.

The only “interesting” CPU in the line-up is the 14700K(F), which comes with extra Efficient (Atom) cores (12 vs. 8) – that is why it is the only CPU tested. Otherwise, all the other CPUs (14900K(F), 14600K(F)) are just faster variants and that’s it.

Changes in Sandra to support Hybrid

Like Windows (and other operating systems), we have had to make extensive changes to detection, thread scheduling and the benchmarks themselves to support hybrid/big-LITTLE. Thankfully, this means we are not dependent on Windows support – you can confidently test AlderLake/RaptorLake(-Refresh) on older operating systems (e.g. Windows 10 or earlier – or Server 2022/2019/2016 or earlier) – although it is probably best to run the very latest operating systems for the best overall (outside benchmarking) computing experience.

  • Detection Changes
    • Detect big/P and LITTLE/E cores (see the CPUID sketch after this list)
    • Detect correct number of cores (and type), modules and threads per core -> topology
    • Detect correct cache sizes (L1D, L1I, L2) depending on core
    • Detect multipliers depending on core
  • Scheduling Changes

    • “All Threads (MT/MC)” (all cores + all their threads) – thus 28T
      • “All Cores (MC aka big+LITTLE) Only” (both core types, no SMT threads) – thus 20T
    • “All Threads big/P Cores Only” (only “Core” cores + their SMT threads) – thus 16T
      • “big/P Cores Only” (only “Core” cores) – thus 8T
      • “LITTLE/E Cores Only” (only “Atom” cores) – thus 12T
    • “Single Thread big/P Core Only” (a single “Core” core) – thus 1T
    • “Single Thread LITTLE/E Core Only” (a single “Atom” core) – thus 1T
  • Benchmarking Changes
    • Dynamic/Asymmetric workload allocator – based on each thread’s compute power (a minimal allocation sketch follows further below)
      • Note some tests/algorithms are not well-suited for this (here P threads will finish and wait for E threads – thus effectively having only E threads). Different ways to test algorithm(s) will be needed.
    • Dynamic/Asymmetric buffer sizes – based on each thread’s L1D caches
      • Memory/Cache buffer testing using different block/buffer sizes for P/E threads
      • Algorithms (e.g. GEMM) using different block sizes for P/E threads
    • Best performance core/thread default selection – based on test type
      • Some tests/algorithms run best just using cores only (SMT threads would just add overhead)
      • Some tests/algorithms (streaming) run best just using big/P cores only (E cores just too slow and waste memory bandwidth)
      • Some tests/algorithms sharing data run best on same type of cores only (either big/P or LITTLE/E) (sharing between different types of cores incurs higher latencies and lower bandwidth)
    • Reporting the Performance Contribution & Ratio of each thread
      • Thus the big/P and LITTLE/E cores contribution for each algorithm can be presented. In effect, this allows better optimisation of algorithms tested, e.g. detecting when either big/P or LITTLE/E cores are not efficiently used (e.g. overloaded)
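
To illustrate the detection side, here is a small sketch (ours, not Sandra’s detection code) that classifies each logical processor as big/P (“Core”) or LITTLE/E (“Atom”) using CPUID leaf 0x1A – which must be executed on the logical processor being queried, hence the affinity pinning:

```cpp
// Sketch: classify logical CPUs as big/P or LITTLE/E via CPUID leaf 0x1A.
// Assumes a single processor group (< 64 logical processors).
#include <windows.h>
#include <intrin.h>
#include <cstdio>

enum class CoreType { Unknown, Atom, Core };

static CoreType QueryCoreType(DWORD logicalCpu)
{
    // Pin the current thread so CPUID reports the core we are asking about
    const DWORD_PTR old = SetThreadAffinityMask(GetCurrentThread(),
                                                DWORD_PTR(1) << logicalCpu);
    int r[4] = {};
    __cpuid(r, 0);
    CoreType type = CoreType::Unknown;
    if (r[0] >= 0x1A) {
        __cpuidex(r, 0x1A, 0);
        switch ((unsigned(r[0]) >> 24) & 0xFF) {     // EAX[31:24] = core type
            case 0x20: type = CoreType::Atom; break; // LITTLE/E
            case 0x40: type = CoreType::Core; break; // big/P
        }
    }
    if (old) SetThreadAffinityMask(GetCurrentThread(), old);
    return type;
}

int main()
{
    SYSTEM_INFO si; GetSystemInfo(&si);
    for (DWORD cpu = 0; cpu < si.dwNumberOfProcessors && cpu < 64; ++cpu) {
        const CoreType t = QueryCoreType(cpu);
        std::printf("LP %2lu: %s\n", cpu,
                    t == CoreType::Core ? "big/P (Core)" :
                    t == CoreType::Atom ? "LITTLE/E (Atom)" : "unknown");
    }
}
```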

As per the above, you could forgive some developers for simply restricting their software to the big/Performance threads only and ignoring the LITTLE/Efficient threads altogether – at least for compute-heavy algorithms.
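
Rather than ignoring the LITTLE/E threads entirely, a dynamic allocator lets both core types contribute without the big/P threads ever waiting. A minimal sketch of the idea (hypothetical names, not Sandra’s allocator): worker threads pull fixed-size chunks from a shared atomic counter, so the faster big/P threads simply end up processing more chunks:

```cpp
// Sketch: dynamic (asymmetric-friendly) work distribution via chunk pulling.
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

template <typename Fn>
void ParallelForDynamic(std::size_t total, std::size_t chunk,
                        unsigned threads, Fn&& body)
{
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threads; ++t) {
        pool.emplace_back([&] {
            for (;;) {
                const std::size_t begin = next.fetch_add(chunk);
                if (begin >= total) break;                    // nothing left
                const std::size_t end =
                    begin + chunk < total ? begin + chunk : total;
                for (std::size_t i = begin; i < end; ++i)
                    body(i);                                  // per-element work
            }
        });
    }
    for (auto& th : pool) th.join();
}

// Hypothetical usage, e.g. an image filter over `pixels` elements:
//   ParallelForDynamic(pixels, 4096, std::thread::hardware_concurrency(),
//                      [&](std::size_t i) { out[i] = Filter(in, i); });
```

A static 50/50 split, by contrast, would leave the big/P threads idle once they finish their half – effectively reducing the run to LITTLE/E speed, which is exactly the pitfall described in the list above.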

For this reason we recommend using the very latest version of Sandra and keeping up with updated versions that fix bugs and improve performance and stability.

But is it RaptorLake-Refresh, RaptorLake or even AlderLake-Refresh?

Unfortunately, it seems that not all CPUs labelled “13th Gen” are “RaptorLake” (RPL); some mid-range i5 and low-range i3 models come with “AlderLake” (Refresh) ADL-R cores, which is likely to confuse ordinary people into buying these older-gen CPUs.

What is more confusing is that the ID (aka CPUID) of these 13th Gen ADL-R/RPL models is the same (e.g. 0B067x) and does not match the old ADL (e.g. 09067x). However, the L2 cache sizes are the same as old ADL (1.25MB for big/Core and 2MB for LITTLE/Atom cluster) not the larger RPL (2MB for big/Core and 4MB for LITTLE/Atom cluster).

Note: There is still a possibility these are actually RPL cores but with L2 cache(s) reduced (part disabled/fused off) in order not to outperform higher models.
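
For reference, here is a hedged sketch (using MSVC intrinsics; not necessarily how Sandra does it) of reading the raw CPUID signature and the L2 size of the core the thread is currently running on – one way to tell a “13th Gen” part with ADL-sized caches (1.25MB big-core L2) from genuine RPL silicon (2MB big-core L2):

```cpp
// Sketch: dump the CPUID signature and the current core's L2 size (Intel leaf 4).
#include <intrin.h>
#include <cstdio>

int main()
{
    int r[4] = {};
    __cpuid(r, 1);
    const unsigned sig      = unsigned(r[0]);     // e.g. 0B067x (RPL) vs 09067x (ADL)
    const unsigned family   = (sig >> 8) & 0xF;
    const unsigned model    = ((sig >> 4) & 0xF) | ((sig >> 12) & 0xF0);
    const unsigned stepping = sig & 0xF;
    std::printf("CPUID signature %06X (family %u, model %02X, stepping %u)\n",
                sig, family, model, stepping);

    // Deterministic cache parameters; on hybrid CPUs pin the thread first,
    // since big/P and LITTLE/E cores report different L2 sizes.
    for (int idx = 0; ; ++idx) {
        __cpuidex(r, 4, idx);
        const unsigned type = r[0] & 0x1F;        // 0 = no more cache levels
        if (!type) break;
        const unsigned level = (r[0] >> 5) & 0x7;
        const unsigned ways  = ((r[1] >> 22) & 0x3FF) + 1;
        const unsigned parts = ((r[1] >> 12) & 0x3FF) + 1;
        const unsigned line  = (r[1] & 0xFFF) + 1;
        const unsigned sets  = unsigned(r[2]) + 1;
        if (level == 2)
            std::printf("L2: %u kB (%u-way)\n", ways * parts * line * sets / 1024, ways);
    }
}
```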

CPU (Core) Performance Benchmarking

In this article we test CPU core performance; please see our other articles on:

Hardware Specifications

We are comparing the latest Intel desktop architectures against their predecessors as well as the competition (AMD), with a view to upgrading to a top-of-the-range, high-performance design.

| Specifications | Intel Core i7 14700K(F) 8C+12c/28T (RPL-R) | Intel Core i7 13700K(F) 8C+8c/24T (RPL) | Intel Core i7 12700K(F) 8C+4c/20T (ADL) | AMD Ryzen 7 7700X 8C/16T (Zen4) | Comments |
|---|---|---|---|---|---|
| Arch(itecture) | Raptor Cove + Gracemont / RaptorLake Refresh | Raptor Cove + Gracemont / RaptorLake | Golden Cove + Gracemont / AlderLake | Zen4 / Raphael | The very latest arch |
| Modules (CCX) / Cores (CU) / Threads (SP) | 8C+12c / 28T | 8C+8c / 24T | 8C+4c / 20T | 8C / 16T | 4 more (1.5x) LITTLE cores than RPL! |
| Rated/Turbo Speed (GHz) | 2.5 – 4.3GHz [E] / 3.4 – 5.6GHz [P] [+3.7%] | 2.5 – 4.2GHz [E] / 3.4 – 5.4GHz [P] | 2.7 – 3.8GHz [E] / 3.6 – 5GHz [P] | 4.5 – 5.4GHz | Turbo is about 4% higher |
| Rated/Turbo Power (W) | 125 – 253W [PL2] | 125 – 253W [PL2] | 125 – 190W [PL2] | 105 – 142W [PPT] | Same ratings |
| L1D / L1I Caches | 8x 48/32kB + 12x 32/64kB | 8x 48/32kB + 8x 32/64kB | 8x 48/32kB + 4x 32/64kB | 8x 32kB 8-way / 8x 32kB 8-way | Same L1D/L1I caches |
| L2 Caches | 8x 2MB + 3x 4MB (28MB) [+16%] | 8x 2MB + 2x 4MB (24MB) | 8x 1.25MB + 2MB (12MB) | 8x 1MB 16-way (8MB) | L2 is 16% larger |
| L3 Cache(s) | 33MB 16-way [+10%] | 30MB 16-way | 25MB 16-way | 32MB 16-way | L3 is 10% larger |
| Microcode (Firmware) | 0B0671-11D | 0B0671-10E | 090672-1E | A20F10-1003 | Revisions just keep on coming. |
| Special Instruction Sets | VNNI/256, SHA, VAES/256 | VNNI/256, SHA, VAES/256 | VNNI/256, SHA, VAES/256 | AVX512, VNNI/512, SHA, VAES/512 | AVX512 still MIA |
| SIMD Width / Units | 256-bit | 256-bit | 256-bit | 512-bit (as 2x 256-bit) | Same SIMD units |
| Price / RRP (USD) | $409 | $409 | $449 | $399 | Same price |

Disclaimer

This is an independent review (critical appraisal) that has not been endorsed nor sponsored by any entity (e.g. Intel, etc.). All trademarks acknowledged and used for identification only under fair use.

And please, don’t forget small ISVs like ourselves in these very challenging times. Please buy a copy of Sandra if you find our software useful. Your custom means everything to us!

SiSoftware Official Ranker Scores

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest performing instruction sets. “RaptorLake-Refresh” (RPL-R), like RPL/ADL, does not support AVX512 – but it does support 256-bit versions of some of the original AVX512 extensions (e.g. VNNI, VAES).

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 11 x64 (22H2), latest AMD and Intel drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.

Native Benchmarks Intel Core i7 14700K(F) 8C+12c/28T (RPL-R) Intel Core i7 13700K(F) 8C+8c/24T (RPL) Intel Core i7 12700K(F) 8C+4c/20T (ADL) AMD Ryzen 7 7700X 8C/16T (Zen4) Comments
CPU Arithmetic Benchmark Native Dhrystone Integer (GIPS) 910 [+6%] 857 629 648 RPL-R is 6% faster
CPU Arithmetic Benchmark Native Dhrystone Long (GIPS) 991 [+14%] 872 668 657 A 64-bit integer workload RPL-R is 14% faster
CPU Arithmetic Benchmark Native FP32 (Float) Whetstone (GFLOPS) 648 [+24%] 522 431 363 With floating-point, RPL-R is 24% faster
CPU Arithmetic Benchmark Native FP64 (Double) Whetstone (GFLOPS) 412 [+1%] 409 345 306 With FP64 RPL-R is only 1% faster.
With non-SIMD code, we see a solid performance uplift in both integer (ol’ Dhrystone) and floating-point (ol’ Whetstone) of ~11% on average over RPL, which helps push RPL-R even further ahead of AMD’s current Ryzen. The extra 4 LITTLE/Atom cores do seem to help a lot.

Thus for normal, non-SIMD code – RPL-R will perform much better and provide a great upgrade over RPL/ADL and cement Intel’s domination in some workloads (Cinebench?)…

BenchCpuMM Native Integer (Int32) Multi-Media (Mpix/s) 2,596 [+4%] 2,485 1,993 3,111* RPL-R is 4% faster than old RPL here.
BenchCpuMM Native Long (Int64) Multi-Media (Mpix/s) 986 [+13%] 869 707 971* With a 64-bit integer workload, RPL-R is 13% faster.
BenchCpuMM Native Quad-Int (Int128) Multi-Media (Mpix/s) 186 [+13%] 164 137 275** Using 64-bit int to emulate Int128 RPL-R is 13% faster.
BenchCpuMM Native Float/FP32 Multi-Media (Mpix/s) 2,969 [+12%] 2,642 2,189 2,829* In this floating-point vectorised test RPL-R is 12% faster
BenchCpuMM Native Double/FP64 Multi-Media (Mpix/s) 1,527 [+13%] 1,357 1,128 1,541* Switching to FP64 RPL-R is 13% faster
BenchCpuMM Native Quad-Float/FP128 Multi-Media (Mpix/s) 72 [+11%] 64.9 51.8 59.9* Using FP64 to mantissa extend FP128 RPL-R is 11% faster
With heavily vectorised SIMD workloads, RPL-R sees a similar improvement: it is around 12% faster than old RPL across all tests, with minor variations. For older software just using AVX2/FMA3, RPL-R flies past RPL/ADL as well as older CPUs (Zen3, Zen2, etc.)

Unfortunately, AMD’s Zen4 supports AVX512 – which allows it to beat RPL-R in 50% of tests. This shows just how much software can gain from AVX512 even when not executed full width (as Zen4 splits it into 2x 256-bit). Intel will need to find a solution for future arch as more and more software will start supporting AVX512.

Note:* using AVX512 instead of AVX2/FMA.

Note:** using AVX512-IFMA52 to emulate 128-bit integer operations (int128).

BenchCrypt Crypto AES-256 (GB/s) 41 [+4%] 39.1 38.3 31 RPL-R is just 4% faster.
BenchCrypt Crypto AES-128 (GB/s) 39.1 What we saw with AES-256 just repeats with AES-128.
BenchCrypt Crypto SHA2-256 (GB/s) 46 [+22%] 37.98 28.15 34.29 With SHA, RPL-R is 22% faster .
BenchCrypt Crypto SHA1 (GB/s) 33.67 The less compute-intensive SHA1 does not change things due to acceleration.
As streaming tests (crypto/hashing) are memory bound, RPL-R cannot pull far ahead of RPL at the same memory speed – it would need much faster DDR5 to feed all its 20 cores!

But with (hardware-accelerated) SHA, RPL-R does manage to beat ADL by a huge 64% and thus even the AVX512-enabled AMD Zen4, which is pretty impressive. The extra LITTLE/Atom cores do help here with SIMD integer workloads.

Note***: using VAES 256-bit (AVX2) or 512-bit (AVX512)

Note**: using SHA HWA not SIMD (e.g. AVX512, AVX2, AVX, etc.)

Note*: using AVX512 not AVX2.

BenchFinance Black-Scholes float/FP32 (MOPT/s) The standard financial algorithm.
BenchFinance Black-Scholes double/FP64 (MOPT/s) 675 [+17%] 577 466 481 Switching to FP64 code, RPL-R is 17% faster
BenchFinance Binomial float/FP32 (kOPT/s) Binomial uses thread shared data thus stresses the cache & memory system;
BenchFinance Binomial double/FP64 (kOPT/s) 197 [+9%] 180 138 142 With FP64 code RPL-R is 9% faster.
BenchFinance Monte-Carlo float/FP32 (kOPT/s) Monte-Carlo also uses thread shared data but read-only thus reducing modify pressure on the caches
BenchFinance Monte-Carlo double/FP64 (kOPT/s) 271 [+18%] 229 185 203 Here RPL-R is 18% faster.
AMD’s Zen always did well on non-SIMD floating-point algorithms – but here RPL-R shows the times are changing; with 15% improvement over “old” RPL, it has no problem dispatching even the latest Zen4 and all of its improvements.
BenchScience SGEMM (GFLOPS) float/FP32 In this tough vectorised algorithm that is widely used (e.g. AI/ML).
BenchScience DGEMM (GFLOPS) double/FP64 561 [+12%] 501 385 484* RPL-R is the usual 12% faster
BenchScience SFFT (GFLOPS) float/FP32 FFT is also heavily vectorised but stresses the memory sub-system more.
BenchScience DFFT (GFLOPS) double/FP64 29.44 [+12%] 26.17 24.6 21.61* With FP64 code, RPL-R is 12% faster
BenchScience SNBODY (GFLOPS) float/FP32 N-Body simulation is vectorised but fewer memory accesses.
BenchScience DNBODY (GFLOPS) double/FP64 272 [+3%] 263 190 380* With FP64 RPL-R is 3% faster.
We see the usual ~12% better performance of RPL-R here – though higher-clocked (but decent-latency) memory has a larger impact. By now much faster DDR5 memory has become affordable (e.g. 6400MT/s and higher) – while originally ADL was likely paired with older, slower DDR4 (3200MT/s typical) which greatly reduced performance.
Note*: using AVX512 not AVX2/FMA3.
CPU Image Processing Blur (3×3) Filter (MPix/s) 7,459 [+14%] 6,524 4,698 7,725* In this vectorised integer workload RPL-R is 14% faster.
CPU Image Processing Sharpen (5×5) Filter (MPix/s) 2,859 [+13%] 2,521 1,807 3,225* Same algorithm but more shared data 13% faster.
CPU Image Processing Motion-Blur (7×7) Filter (MPix/s) 1,429 [+14%] 1,255 893 1,640* Again same algorithm but even more data shared – 14% faster
CPU Image Processing Edge Detection (2*5×5) Sobel Filter (MPix/s) 2,394 [+13%] 2,126 1,554 2,406* Different algorithm RPL-R is 13% faster.
CPU Image Processing Noise Removal (5×5) Median Filter (MPix/s) 195 [+8%] 180 129 348* Still vectorised code RPL-R is 8% faster.
CPU Image Processing Oil Painting Quantise Filter (MPix/s) 99 [+13%] 88 68 51* This test has always been tough RPL-R is 13% faster.
CPU Image Processing Diffusion Randomise (XorShift) Filter (MPix/s) 6,419 [+1%] 6,333 6,257 4,957* With integer workload, RPL-R is 1% faster.
CPU Image Processing Marbling Perlin Noise 2D Filter (MPix/s) 1,355 [+10%] 1,233 988 824* In this final test we see RPL-R is 10% faster
These tests love SIMD vectorised compute; here RPL-R is again ~12% faster than RPL, which even allows it to beat the AVX512-enabled Zen4 in some of the tests.

These tests also showed how much Zen4 benefits from AVX512 and, in effect, how much RPL-R misses out by not having AVX512 enabled. With AMD on board, AVX512 adoption is likely to increase, thus Intel had better bring support to Atom somehow, soon…

Note*: using AVX512 not AVX2/FMA3.

Intel RaptorLake-Refresh 14700K(F) (8C + 12c) Inter-Thread/Core HeatMap Latency (ns)

The inter-thread/core/module latencies “heat-map” shows how the latencies vary when transferring data off-thread (same L1D), off-core (same L3 for big Cores but same L2 for Little Atom cores) and different-core-type (big Core to Little Atom).

Still, judicious thread-pair scheduling is needed to keep latencies low (and conversely bandwidth high) when large amounts of data are transferred.
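
For context, here is a rough sketch of how a single cell of such a heat-map can be measured (our illustration, not Sandra’s exact methodology): two threads pinned to specific logical CPUs bounce a value through a shared atomic, and half the round-trip time approximates the one-way latency:

```cpp
// Sketch: ping-pong latency between two pinned logical CPUs (single group, <64 LPs).
#include <windows.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

static void Pin(DWORD cpu)
{
    SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << cpu);
}

double PingPongNs(DWORD a, DWORD b, int iters = 100000)
{
    alignas(64) std::atomic<int> flag{0};          // shared cache line

    std::thread responder([&] {
        Pin(b);
        for (int i = 0; i < iters; ++i) {
            while (flag.load(std::memory_order_acquire) != 2 * i + 1) {}
            flag.store(2 * i + 2, std::memory_order_release);
        }
    });

    Pin(a);
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        flag.store(2 * i + 1, std::memory_order_release);
        while (flag.load(std::memory_order_acquire) != 2 * i + 2) {}
    }
    const auto t1 = std::chrono::steady_clock::now();
    responder.join();

    return std::chrono::duration<double, std::nano>(t1 - t0).count() / iters / 2.0;
}

int main()
{
    std::printf("LP0 <-> LP2: ~%.1f ns one-way\n", PingPongNs(0, 2));
}
```

Pairing a big/P thread with its SMT sibling, with another big/P core, with a LITTLE/E core in the same cluster, or with one across clusters produces the different latency bands visible in the heat-map above.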

CPU Multi-Core Benchmark Total Inter-Thread Bandwidth – Best Pairing (GB/s) 156 [-6%] 166 121 132* 6% less bandwidth than RPL
Despite the extra E/Atom cluster, inter-core bandwidth transfers are slightly below RPL in our tests.
Note:* using AVX512 512-bit wide transfers.
CPU Multi-Core Benchmark Average Inter-Thread Latency (ns) 39.4 [+11%] 35.5 37.8 15.1 Overall latencies are 11% higher.
CPU Multi-Core Benchmark Inter-Thread Latency (Same Core) Latency (ns) 11.6 [+1%] 11.5 11.6 7.7 Inter-Thread (big Core) latency is 1% higher.
CPU Multi-Core Benchmark Inter-Core Latency (big Core, same Module) Latency (ns) 34.3 [+3%] 33.4 35.3 15.6 We see Inter-big-Core latency 3% higher.
CPU Multi-Core Benchmark Inter-Core (Little Core, same Module) Latency (ns) 38.3 [+2%] 37.6 35.3 We see Inter-Little-Atom latency 2% higher.
CPU Multi-Core Benchmark Inter-Core (Big 2 Little) Latency (ns) 41.9 [+1%] 41.5 44 We see Inter-Big-2-Little latency 1% higher.
Due to the increased number of Little Atom cores (12 vs. 8), the overall latency of RPL-R is naturally going to be higher than RPL.

Otherwise, all latencies are within margins, as expected.

Aggregate Score (Points) 22,780 [+12%] 20,250 16,650 17,280* Across all benchmarks, RPL-R is 12% faster than RPL!
Across all the benchmarks, RPL-R ends up a good 12% faster than “old” RPL, which allows it to increase its wins over the competition. As Sandra’s benchmark scores get a significant uplift from AVX512, this means RPL-R is at a significant disadvantage versus Zen4.

Despite its split (2x) 256-bit AVX512 implementation, we’ve seen Zen4 get a significant uplift from AVX512 – and here Intel needs to implement a solution for future arch(itecture)s (“MeteorLake” MTL?) as more and more software will add AVX512 support now that AMD is on board.

Note*: using AVX512 instead of AVX2/FMA3.

Price/RRP (USD) $409 [=] $409 $449 $399 Same price
Price Efficiency (Perf. vs. Cost) (Points/USD) 55.7 [+12%] 49.51 37.08 43.31 Overall 12% more performance for the price
With the price almost the same, the bang-for-buck has also increased by the same amount (+12%), which makes RPL-R much better value than RPL, ADL or even Zen4 (at least in these benchmarks). For Intel’s platform, it is the best value.
Power/TDP – Turbo (W) 125 – 253W [PL2] [=] 125 – 253W [PL2] 125 – 190W [PL2] 105 – 142W [PPT] Same TDP/turbo
Power Efficiency (Perf. vs. Power) (Points/W) 90.04 [+12%] 80.04 87.63 121.69 RPL-R is 12% more efficient than RPL.
With TDP/turbo unchanged, RPL-R is the same 12% more power efficient than RPL and finally more efficient than ADL (which previously led due to its lower turbo power) – but there is still a way to go to beat Zen4. Intel still has some work to do here.

Final Thoughts / Conclusions

Summary: A Better Value High(er)-End: 7/10

The changes from ADL to RPL were already minor (12th to 13th Gen) but this time RPL-R is not even a new/updated RPL stepping – it is the very same silicon, better binned for higher speeds. This must be the smallest difference between generations ever!

On desktop, Intel has been stuck with a maximum of 8 big/P(erformance) Cores on the high-end since 11th Gen (“Rocket Lake”), while their competitor (AMD) has gone all the way up to 16. Intel had no choice but to increase the number of LITTLE/E(fficient) Atom cores to improve performance from generation to generation (12th to 13th and now 14th Gen). The “value” high-end i7 models have at least gained LITTLE/E(fficient) cores consistently – from 4 to 8 (2x) and now to 12 (3x) – with each generation getting that much closer to the i9, leaving the latter for bragging rights only (32 threads, count them! 😉)

Thus RPL-R, with higher clocks and 50% more LITTLE/E Atom cores, ends up about 12% faster than RPL, which allows it to win more benchmarks against AMD’s AVX512-enabled Zen4. For the same price and TDP that is not bad, but hardly revolutionary or even evolutionary. Considering it is the same stepping, with no other improvements or features whatsoever, it is somewhat disappointing.

But a gain (for the same money and power) is a gain – and hopefully this also drives down the prices of “old” RPL (13th gen) and even older ADL (12th gen), which would represent better value if discounted by 15% or more. With Black Friday around the corner, there may be good deals to be had.

Summary: A Better Value High(er)-End: 7/10

Further Articles

Please see our other articles on:

