
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    32,465
    Post Thanks / Like
    #5491

    Anandtech: The Corsair H80i GT and H100i GTX AIO Coolers Review

    Today we are having a look at the upgraded 120 mm AIO liquid coolers from Corsair, the single slot H80i GT and the dual slot H100i GTX. Both come with two high pressure 120 mm fans, are based on the same core design and feature RGB lighting and Corsair Link support. In this review we also examine their performance, especially in relation to the vanilla H80i and H100i versions.

    More...

  2. #5492

    Anandtech: AMD @ SC15: Boltzmann Initiative Announced - C++ and CUDA Compilers for AMD GPUs

    The second in our major SC15 announcements comes from AMD, who is taking to the show to focus on the HPC capabilities of their FirePro S line of server cards. Of all of the pre-briefings we’ve sat in on in the past two weeks AMD’s announcement today is by far the most significant. And it’s only fitting then that this happens when SC is taking place in AMD’s backyard: Austin, Texas.
    So what has AMD so excited for SC15? In short the company is about to embark on a massive overhaul of their HPC software plans. Dubbed the Boltzmann Initiative – after father of statistical mechanics Ludwig Boltzmann – AMD will be undertaking a much needed redevelopment effort of their HPC software ecosystem in order to close the gap with NVIDIA and offer an environment competitive (and compatible!) with CUDA. So with that in mind, let’s jump right in.
    Headless Linux & HSA-based GPU Environment

    Perhaps the cornerstone of the Boltzmann Initiative is with AMD’s drivers, which are being improved and overhauled to support AMD’s other plans. The company will be building a dedicated 64-bit Linux driver specifically for headless operation under Linux. It’s only been in the last year that AMD has really focused on headless Linux operation – prior to that headless OpenCL execution was a bit of a hack – and with the new driver AMD completes what they’ve started.
    But more importantly than that, the headless Linux driver will be implementing an HSA extended environment, which will bring with it many of the advantages of the Heterogeneous System Architecture to AMD’s FirePro discrete GPUs. This environment, which AMD is calling HSA+, builds off of the formal HSA standard by adding extensions specifically to support HSA with discrete GPUs. The extensions themselves are going to be non-standard – the HSA Foundation has been focused on truly integrated devices ala APUs, and it doesn’t sound like these extensions will be accepted upstream into mainstream HSA any time soon – but AMD will be releasing the extensions as an open source project in the future.
    The purpose of extending HSA to dGPUs, besides meeting earlier promises, is to bring as many of the benefits of the HSA execution model to dGPUs as is practical. For AMD this means being able to put HSA CPUs and dGPUs into a single unified address space – closing a gap with NVIDIA since CUDA 6 – which can significantly simplify programming for applications which are actively executing work on both the CPU and the GPU. Using the HSA model along with this driver also allows AMD to address other needs such as bringing down dispatch latency and improving support/performance for large clusters where fabrics such as InfiniBand are being used to link together the nodes in a cluster. Combined with the basic abilities of the new driver, AMD is in essence laying some much-needed groundwork to offer a cluster feature set more on-par with the competition.
    Heterogeneous Compute Compiler – Diverging From OpenCL, Going C++

    The second part of the Boltzmann Initiative is AMD’s new compiler for HPC, the Heterogeneous Compute Compiler. Built on top of work the company has already done for their HSA compiler, the HCC will be the first of AMD’s two efforts to address the programming needs of the HPC user base, who by and large have passed on AMD’s GPUs in part due to a lackluster HPC software environment.
    As a bit of background here before going any further, one of the earliest advantages for NVIDIA and CUDA was supporting C++ and other high-level programming languages at a time when OpenCL could only support a C-like syntax, and programming for OpenCL was decidedly lower level. AMD meanwhile continued to back OpenCL, in part to support an open ecosystem, and while OpenCL made great strides with the provisional release of OpenCL 2.1 and the OpenCL C++ kernel language this year, in a sense the damage has been done. OpenCL sees minimal use in the HPC space, and further complicating matters is the fact that not all of the major vendors support OpenCL 2.x. AMD for their part is polite enough not to name names, but at this point the laggard is well known to be NVIDIA, who only supports up to OpenCL 1.2 (and seems to be in no rush to support anything newer).
    As a result of these developments AMD is altering their software strategy, as it’s clear that the company can no longer just bank on OpenCL for their HPC software API needs. I hesitate to say that AMD is backing away from OpenCL at all, as in our briefings AMD made it clear that they intend to continue to support OpenCL, and based on their attitude and presentation this doesn’t appear to be a hollow corporate boilerplate promise in order to avoid rocking the boat. But there’s a realization that even if OpenCL delivers everything AMD ever wanted, it’s hard to leverage OpenCL when support for the API is fragmented and when aspects of OpenCL C++ are still too low level, so AMD will simultaneously be working on their own API and environment.
    This environment will be built around the Heterogeneous Compute Compiler. In some ways AMD’s answer to CUDA, the HCC is a single C/C++/OpenMP compiler for both the CPU and the GPU. Like so many recent compiler projects, AMD will be leveraging parts of Clang and LLVM to handle the compilation, along with portions of HSA as previously described to serve as the runtime environment.
    The purpose of the HCC will be to allow developers to write CPU and/or GPU code using a single compiler, in a single language, inside a single source file. The end result is something that resembles Microsoft’s C++ AMP, with developers simply making parallel calls within a C++ program as they see fit. Perhaps most importantly for AMD and their prospective HPC audience, HCC means that a separate source file for GPU kernels is not needed, a limitation that persists right up through OpenCL C++.

    An Example of HCC Code (Source)
    Overall HCC will expose parallelism in two ways. The first is through explicit syntax for parallel operations, à la C++ AMP: developers call parallel-capable functions such as parallel_for_each to explicitly set up segments of code that can run in parallel and to define how those segments interact with the rest of the program, with this functionality built around C++ lambdas. The second method, at an even higher level, will be to leverage the forthcoming Parallel STL (Standard Template Library), which is slated to come with C++17. The Parallel STL will contain a number of parallelized standard functions for GPU/accelerator execution, making things even simpler for developers, as they no longer need to explicitly account for and control certain aspects of parallel execution, and can use the STL functions as a base for modification/extension.
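    To make the explicit style concrete: HCC itself is a C++ compiler, but the shape of a parallel_for_each call can be sketched in Python. The function below is an illustrative stand-in for the idea (a lambda body applied across an index extent, with scheduling left to the runtime), not HCC's actual API.

```python
# Sketch of the C++ AMP / HCC "explicit parallelism" pattern: the developer
# hands a lambda and an index extent to the runtime, which schedules the work.
# parallel_for_each here is an invented stand-in, not a real HCC entry point.
from concurrent.futures import ThreadPoolExecutor

def parallel_for_each(extent, body, workers=4):
    """Apply `body` to every index in `extent`; execution order is unspecified."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(body, extent))  # force completion, propagate exceptions

# Single-source style: the "kernel" is just a lambda in the host program.
n = 8
a = [1.0] * n
b = [2.0] * n
c = [0.0] * n
parallel_for_each(range(n), lambda i: c.__setitem__(i, a[i] + b[i]))
print(c)  # every element is a[i] + b[i] = 3.0
```

The point of the pattern is that the GPU work lives in the same source file and language as the CPU code, which is exactly the limitation of separate OpenCL kernel files that HCC is meant to remove.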
    Ultimately HCC is intended to modernize GPU programming for AMD GPUs and to bring some much-desired features to the environment. Along with the immediate addition of basic parallelism and standard parallel functions, the HCC will also include some other features specifically for improving performance on GPUs and other accelerators. This includes support for pre-fetching data, asynchronous compute kernels, and even scratchpad memories (i.e. the AMD LDS Local Data Share). Between these features, AMD is hopeful that they can offer the kind of programming environment that HPC users have wanted, an environment that is more welcoming to new HPC programmers, and an environment that is more welcoming to seasoned CUDA programmers as well.
    Heterogeneous-compute Interface for Portability – CUDA Compilation For AMD GPUs

    Last but certainly not least in the Boltzmann Initiative is AMD’s effort to fully extend a bridge into the world of CUDA developers. With HCC to bring AMD’s programming environment more on par with what CUDA developers expect, AMD realizes that just being as good as NVIDIA won’t always be good enough, that developers accustomed to the syntax of CUDA won’t want to change, and that CUDA won’t be going anywhere anytime soon. The solution to that problem is the Heterogeneous-compute Interface for Portability, otherwise known as HIP, which gives CUDA developers the tools they need to easily move over to AMD GPUs.
    Through HIP AMD will bridge the gap between HCC and CUDA by giving developers a CUDA-like syntax – the various HIP API commands – allowing developers to program for AMD GPUs in a CUDA-like fashion. Meanwhile HIP will also include a toolset (the HIPify Tools) that further simplifies porting by automatically converting CUDA code to HIP code. And finally, once code is HIP – be it natively written that way or converted – it can then be compiled for either NVIDIA or AMD GPUs through NVCC (using a HIP header file to add HIP support) or HCC respectively.
    To be clear here, HIP is not a means for AMD GPUs to run compiled CUDA programs. CUDA is and remains an NVIDIA technology. But HIP is the means for source-to-source translation, so that developers will have a far easier time targeting AMD GPUs. Given that the HPC market is one where developers are typically writing all of their own code anyhow and tweaking it for the specific architecture it’s meant to run on, a source-to-source compiler covers most of AMD’s needs right there, and it retains AMD’s ability to compile CUDA code from a high level, where they can better optimize that code for their GPUs.
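    The source-to-source idea can be illustrated with a toy converter. The real HIPify tools are far more sophisticated (they handle kernel launch syntax, headers, driver API calls, and so on); the few-entry mapping below only demonstrates the principle of rewriting CUDA runtime API names into their HIP counterparts.

```python
import re

# Toy hipify: rewrite a handful of CUDA runtime API identifiers into their
# HIP equivalents. Illustration only -- the real HIPify toolset covers much more.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    # Longest names first so cudaMemcpyHostToDevice isn't split by cudaMemcpy;
    # \b boundaries keep identifiers like mycudaMallocWrapper untouched.
    names = sorted(CUDA_TO_HIP, key=len, reverse=True)
    pattern = re.compile(r"\b(" + "|".join(names) + r")\b")
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

cuda_src = "cudaMalloc(&d_a, bytes); cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);"
print(hipify(cuda_src))
# -> hipMalloc(&d_a, bytes); hipMemcpy(d_a, h_a, bytes, hipMemcpyHostToDevice);
```

Because the translation happens at the source level, the resulting HIP code can still be compiled back to NVIDIA GPUs via NVCC, which is what keeps a HIP port a one-codebase proposition.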
    Now there are some unknowns here, including whether AMD can keep HIP up to date with CUDA feature additions, but more importantly there’s a question of just what NVIDIA’s reaction will be. CUDA is NVIDIA’s, through and through, and it does make one wonder whether NVIDIA would try to sue AMD for implementing the CUDA API without NVIDIA’s permission, particularly in light of the latest developments in the Oracle vs. Google case on the Java API. AMD for their part has had their legal team look at the issue extensively and doesn’t believe they’re at risk – pointing in part to Google’s own efforts to bring CUDA support to LLVM with GPUCC – though I suspect AMD’s efforts are a bit more inflammatory given the direct competition. Ultimately it’s a matter that will be handled by AMD and NVIDIA only if it comes to it, but it’s something that does need to be pointed out.
    Otherwise by creating HIP AMD is solving one of the biggest issues that has hindered the company’s HPC efforts since CUDA gained traction, which is the fact that they can’t run CUDA. A compatibility layer against a proprietary API is never the perfect solution – AMD would certainly be happier if everyone could and did program in standard C++ – but there is a sizable user base that has grown up on CUDA and is at this point entrenched with it. And simply put AMD needs to have CUDA compatibility if they wish to wrest HPC GPU market share away from NVIDIA.
    Wrapping things up then, with the Boltzmann Initiative AMD is taking an important and very much necessary step to redefine themselves in the HPC space. By providing an improved driver layer for Linux supporting headless operation and a unified memory space, a compiler for direct, single source C++ compilation on top of that, and a CUDA compatibility layer to reach the established CUDA user base, AMD is finally getting far more aggressive on the HPC side of matters, making the moves that many have argued they have needed to make for quite some time. At this point AMD needs to deliver on their roadmap and ensure they deliver quality tools in the process. Even then, NVIDIA earned their place in the HPC space through good products and will not be easily dislodged – CUDA came at exactly the time when developers needed it – but if AMD can execute on Boltzmann, it will be the first time in half a decade that they have had a fighting chance at tapping into the lucrative HPC market.
    Gallery: AMD Boltzmann Initiative Press Deck





    More...

  3. #5493

    Anandtech: Netgear Expands Smart Home Lineup with Arlo Q IP Camera

    Netgear's push into the consumer IP camera / smart home space began with the Arlo Wire-Free cameras launched last year. The first generation Arlo cameras aimed at addressing the major pain points associated with consumer IP cameras - location restriction close to a power source, measures needed for protection from the elements in outdoor locations, and the need to run an Ethernet cable for connecting to the network. By being weatherproof, battery-powered, and transmitting video over Wi-Fi to a base station only when motion was detected, it delivered enormous flexibility to consumers.
    Netgear had an interesting slide to share with respect to the market share status of the Arlo.
    Within 6 months of introduction, the Arlo managed to climb up to the top of the charts with a 20% share (also helped by Dropcam making a slow transition to the new Nest Cam model). This shows that there is a tremendous market potential for battery-operated IP cameras (something we had not foreseen when the Arlo Wire-Free was launched).
    IP cameras such as Nest Cam (Dropcam) and D-Link's myriad offerings also enjoy success in the market because they address use-cases for which the Arlo Wire-Free isn't suitable:

    • Continuous recording for long events of interest
    • Ability to have 2-way live communication via built-in speakers / microphones
    • Higher resolution for better details in captured video
    • Analytics capabilities - intelligent detection and ability to set up activity zones

    Netgear is introducing the Arlo Q today to address some of these issues. It is intended to be complementary to the Arlo Wire-Free model, which will continue to be sold concurrently.
    The hardware aspects of the Arlo Q are summarized below:

    • Up to 1080p30 video recording (depending on bandwidth availability)
    • 4MP sensor resolution
    • 15' night vision with 850nm IR
    • Dual-band N600 Wi-Fi
    • Integrated speaker and microphone for 2-way audio
    • 130 degree field-of-view

    Other aspects include:

    • Detection for recording using activity zones and/or audio triggers
    • 7 days of free cloud video recording for the lifetime of the device
    • Mobile notifications
    • iOS and Android apps as well as web browser support for video stream viewing
    • Provision for flexible placement (magnetic mounting / wall-mounting brackets / desktop placement)
    • Provision for scheduling recordings (armed / disarmed for motion / audio triggers, scheduling as well as geofencing capabilities)

    Compared to the Nest Cam (which supports only live viewing of the video stream for free, but requires a subscription for cloud recording), Netgear provides seven days of rolling cloud recordings for free (based on motion or audio events). Continuous Video Recording (CVR) is available either on a 14-day or 30-day rolling window plan with the service rates varying depending on the duration as well as the number of cameras in the plan. Note that a subscription is not an essential requirement to take good advantage of most of the capabilities of the Arlo Q.
    We have generally been wary of recommending smart home products that rely purely on the cloud for operation. While the Arlo lineup is currently heavily dependent on the cloud (just like the Nest Cam), Netgear did assure us that they had no qualms in enabling local recording of the video stream (either to storage attached to the Arlo hub or a ReadyNAS unit in the same local network) sometime in the future. Apparently, the main concern is the security of the video stream. Given the 7-day free recording feature, it is clear that Netgear is not relying entirely on the CVR feature to drive the revenue up for the product (which is good for the consumers).
    Netgear also indicated that the Arlo Q and the current Nest Cam both use SoCs from the same vendor, with the one used in the Arlo Q being the next-generation version of the one in the Nest Cam. Given this information, it is likely that the video quality as well as the streaming bitrate will be similar to (or better than) that of the Nest Cam. It is great that the market will get a more consumer-friendly alternative to the Nest Cam, though we would be more enthusiastic if Netgear had the local recording capabilities ready at launch time. The Arlo Q (VMC3040) will be available in December 2015 with an MSRP of $220.


    More...

  4. #5494

    Anandtech: NVIDIA @ SC15: US NOAA to Build Tesla Weather Research Cluster

    Continuing with our coverage of today’s spate of SC15 announcements, we have NVIDIA. Having already launched their Tesla M40 and M4 server cards last week to get ahead of SC15 news and leaks, the company is at the show this week showing off their latest Tesla products. NVIDIA needs no real introduction at this point, and these days their presence at SC15 is more about convincing specific customers/developers about the practicality of using GPUs and other massively parallel accelerators for their specific needs, as at this point the use of GPUs and other accelerators in the Top500 supercomputers continues to grow.
    Along with touting the number of major HPC applications that are now GPU accelerated and the performance impact of that process, NVIDIA’s other major focus at SC15 is to announce their next US government contract win. This time the National Oceanic and Atmospheric Administration (NOAA) is tapping NVIDIA to build a next-gen research cluster. The system, which doesn’t currently have a name, is on a smaller scale than the likes of Summit & Sierra, and will comprise 760 GPUs. The cluster will be operational next year, and given the timing and the wording as a “next-generation” cluster, it’s reasonable to assume that it will be Pascal powered like Summit and Sierra.
    The purpose of the NOAA cluster will be to develop a higher resolution and ultimately more accurate global forecast model. To throw some weather geekery on top of some technology geekery, in recent years the accuracy of the NOAA’s principal global forecast model, the GFS, has fallen behind the accuracy of other competing models such as the European ECMWF. The most famous case of this difference in accuracy came in 2012, when the GFS initially failed to predict that Hurricane Sandy would hit the US, something the ECMWF correctly predicted. As a result there has been a renewed drive towards improving the US models and catching up with the ECMWF, which in turn is what the NOAA’s research cluster will be used to develop.
    Weather forecasting has in turn been a focus of GPU HPC work for a couple of years now – NVIDIA already has Tesla wins for supercomputers that are being used for weather research – but this is the first NOAA contract for the company. Somewhat fittingly, this comes as the NOAA’s Geophysical Fluid Dynamics Laboratory already runs its simulations out of Oak Ridge, home of course to Titan.
    Gallery: NVIDIA SC15 Press Deck




    More...

  5. #5495

    Anandtech: ASUS Launches Maximus VIII Extreme/Assembly for Skylake, includes 10G Ethernet

    We covered the launch of ASUS’ most expensive Z170 motherboard when the announcement was made (we also have it in for review), but late last week another announcement landed on my radar – a second version of the Maximus VIII Extreme, this time using a ‘Plasma Copper’ color scheme requested by system builders but also linking in to the DirectCU graphics card line and the new color scheme for their latest Matrix GTX 980 Ti. The interesting part from this launch comes in the bundle, which includes a front panel audio DAC (connecting into a USB header) but also a 10G Ethernet card.
    10G Ethernet, or at least 10GBase-T, the standard which uses the common RJ-45 Ethernet cables that run through most enthusiast homes, workplaces and hotels, has struggled to get into mainstream computing for various reasons. We saw the ASRock X99 WS-E/10G motherboard introduced late last year with an onboard Intel X540-T2 controller, but the motherboard was expensive as a result, and even then it was only an integrated part on the high-end desktop. For everyone else, using a PCIe card with 10GBase-T connectors was the only way to do so, but these were again quite expensive (I spent $760 on two a few months back) and ran warm. Most of these cards are OEM only, although the generic retailers can sometimes stock them.
    The 10G card ASUS is bundling with the new variant of the Maximus VIII Extreme is one based on Aquantia and Tehuti Networks. Don’t worry, I hadn’t heard of them before either – Ganesh had, though, with one of them. The card is a PCIe 2.0 x4 card that will be verified in all the PCIe x4 and up slots on the motherboard (including from the chipset, so it won’t take up graphics lanes), but it is also capable of 2.5G and 5G speeds should routers for those ever become available. The package also supports standard gigabit Ethernet, with the card being a larger version of Tehuti Networks’ own that we found here, and with ROG branding.
    There is an argument that 10GBase-T isn’t really a home networking type of arrangement – the switches still cost a minimum of $750 for Netgear’s XS708E. There’s also no point having one system with 10G if no other system on your network does. The counter position is that this motherboard package is north of $500 anyway, so it will only be purchased by enthusiasts or prosumers (or even small/medium businesses that already own a 10G capable Xeon-D NAS or backbone, although they might only want the card). We are told that there are no plans to sell the card individually at this point, with the bundle coming to the US at least sometime in the next few weeks.
    Networking aside, the Maximus VIII Extreme is still the same board with U.2 capability, 3T3R 802.11ac wireless, USB 3.1 via both the Intel Alpine Ridge controller as well as ASMedia USB 3.1 controllers, enhanced audio via the SupremeFX brand and overclocking functionality. The SupremeFX Hi-Fi bundled front panel is the additional USB style DAC front panel box, featuring a headphone amp based on an ESS ES9018K2M DAC, dual TI op-amps and an output of over 6V RMS to high impedance headphones.
    We have a review inbound for the Maximus VIII Extreme (it’s pretty much tested, need to finish up then write), but it would be interesting to see how the 10G card performs compared to other solutions. No pricing as of yet until the US specific press release hits.
    Gallery: ASUS Maximus VIII Extreme-Assembly


    Source: ASUS



    More...

  6. #5496

    Anandtech: Intel @ SC15: Launching Xeon Phi “Knights Landing” & Omni-Path Architecture

    The fourth and final of the major SC15 conference announcements/briefings for today comes from Intel. As Intel is in the middle of executing on their previously announced roadmap, they aren’t at SC15 with any announcements of new products this year. However, after almost two years of build-up since its initial announcement, Intel’s second-generation Xeon Phi, Knights Landing, is finally gearing up for its full launch.
    The 14nm successor to Knights Corner (1st gen Xeon Phi), Knights Landing implements AVX-512, Multi-Channel DRAM (MCDRAM), and a new CPU core based on Intel’s Silvermont architecture. Knights Landing is now shipping to Intel’s first customers and developers as part of their early ship program, and pre-production systems for demonstrating supercomputer designs are up and running. Knights Landing is ultimately ramping up for general availability in Q1 of 2016, at which point I expect we’ll also get the final SKU specifications from Intel.
    Meanwhile Knights Landing’s partner in processing, Intel’s Omni-Path Architecture, is formally launching at SC15. Intel’s own take on a high bandwidth low-latency interconnect for HPC, Omni-Path marks Intel’s greatest efforts yet to diverge from InfiniBand and go their own way in the market for interconnect fabrics. We covered Omni-Path a bit earlier this year at Intel’s IDF15 conference, so there aren’t any new technical details to touch upon, however Intel is now throwing out their official performance figures for Omni-Path versus InfiniBand EDR, including the power savings of their larger 48-port switch capabilities.
    Ultimately Knights Landing and the Omni-Path Architecture are part of Intel’s larger effort to build a whole ecosystem, which they’ve been calling the Scalable System Framework. Along with the aforementioned hardware, Intel will be showing off some of the latest software developments for the SSF on the SC15 show floor this week.
    Gallery: Intel SC15 Press Deck





    More...

  7. #5497

    Anandtech: A Few Notes on Intel’s Knights Landing and MCDRAM Modes from SC15

    When learning about new hardware, there are always different angles to look at it from. For the most part, manufacturers talking to the media will focus on the hardware aspects of the new shiny thing: what it can do at a high level, and (sometimes) the low-level silicon architecture detail. Anything on the software side comes through talks about how to write for the new shiny thing – this is why Intel holds conferences such as IDF (Intel Developer Forum) to help explain how to use it. Interestingly, a software talk can reveal information about the true operating nature of the device that a hardware talk does not.
    As such, I attended a tutorial session here at SuperComputing15 on the MCDRAM (Multi-Channel DRAM) used in Intel’s 2nd generation Xeon Phi, code named Knights Landing (KNL). Specifically the talk focused on the analysis methods and tools, and it went into greater depth as to how the implementation works.
    Some of the KNL overview we have seen before – the 72 Silvermont-based cores running at ~1.3 GHz are split into tiles with two cores per tile, two VPUs (Vector Processing Units, AVX-512) per core, and 1MB of L2 cache shared per tile for a total of 36MB of L2 across the design. Rather than the ring topology we see in Intel’s standard processor designs, the tiles are arranged in a mesh topology using an interconnect fabric (which seems to be sets of rings anyway). Despite the 6x7 arrangement in the image above, shots of the package have led some to question whether the layout is closer to 4x9, although this is unconfirmed.
    The big paradigm shifts are everywhere. KNL can be used as the main processor in a computer, running an OS on top of all the cores, or as a co-processor similar to former Xeon Phi silicon – but as noted above in the slide, there is no version of QPI for 2P/4P systems. There are a total of 36 PCIe 3.0 lanes though, for PCIe co-processors, as well as onboard Omni-Path control for network interconnects. The cores are 14nm versions of Silvermont, rather than 22nm P54C, with claims that the out-of-order performance is vastly improved. The die has a total of 10 memory controllers – two DDR4 controllers (supporting three channels each), and eight for MCDRAM.
    Each of these high-bandwidth controllers links out to the on-package MCDRAM (we believe stacked 20nm Micron planar DRAM) through an on-package interposer, offering 400+ GB/s of bandwidth when all the memory is used in parallel. This sounds similar to AMD’s Fiji platform, which offers 4GB of memory over four HBM (high bandwidth memory) stacks, but Intel is prepared to offer 16GB of MCDRAM ‘at launch’. The fact that Intel says ‘at launch’ suggests that there are plans to move to higher capacities in the future.
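    A quick back-of-the-envelope check puts those figures in context. Peak per-channel DDR4-2400 bandwidth is 2400 MT/s × 8 bytes = 19.2 GB/s, so the 400+ GB/s MCDRAM aggregate works out to roughly 50 GB/s per controller and about 3.5× the six-channel DDR4 peak. These are theoretical peaks implied by the numbers above, not measured throughput.

```python
# Arithmetic implied by the figures in the text; peak theoretical rates only.
ddr4_channel_gbs = 2400e6 * 8 / 1e9        # DDR4-2400: 19.2 GB/s per channel
ddr4_total_gbs = 6 * ddr4_channel_gbs      # six channels -> 115.2 GB/s aggregate
mcdram_total_gbs = 400.0                   # quoted "400+" GB/s aggregate
mcdram_per_controller = mcdram_total_gbs / 8   # eight controllers -> 50 GB/s each

print(f"DDR4 aggregate:       {ddr4_total_gbs:.1f} GB/s")
print(f"MCDRAM per controller: {mcdram_per_controller:.1f} GB/s")
print(f"MCDRAM vs DDR4:        {mcdram_total_gbs / ddr4_total_gbs:.1f}x")
```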
    As the diagram stands, the MCDRAM and the regular DDR4 (up to six channels and 384GB of DDR4-2400) are wholly separate, indicating a bi-memory model. This model is what developers will have to contend with, should they wish to extract performance from the part.
    The KNL memory can work in three modes, which are determined by the BIOS at POST time and thus require a reboot to switch between them.
    The first mode is a cache mode, where nothing needs to be changed in the code. The OS will organize the data to use the MCDRAM first, similar to an L3 cache, and then the DDR4 as another level of memory. Intel was coy about the nature of the cache (victim cache, writeback, cache coherency), but as it is used by default it might offer some performance benefit up to 16GB data sizes. The downside comes when the MCDRAM experiences a cache miss – because of the memory controller arrangement, the cache miss has to travel back into the die and then search out into DDR for the relevant memory. This means that an MCDRAM cache miss is more expensive than a simple read out to DDR.
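    That miss penalty implies a break-even hit rate below which cache mode actually loses to going straight to DDR4. A toy latency model makes the structure of the argument clear; the nanosecond figures below are hypothetical, chosen only for illustration.

```python
# Toy model of the cache-mode tradeoff: a miss pays the failed MCDRAM lookup
# *plus* the DDR4 access, so low hit rates make cache mode slower than plain
# DDR4. Latency values are invented for illustration.
def effective_latency(hit_rate, l_mcdram, l_ddr):
    # hit: MCDRAM only; miss: MCDRAM lookup + trip out to DDR
    return hit_rate * l_mcdram + (1 - hit_rate) * (l_mcdram + l_ddr)

L_MCDRAM, L_DDR = 50.0, 90.0   # hypothetical nanoseconds

# Setting effective_latency(h) = L_DDR gives the break-even hit rate
# h = L_MCDRAM / L_DDR; below it, bypassing the MCDRAM cache would be faster.
break_even = L_MCDRAM / L_DDR
print(f"break-even hit rate: {break_even:.2f}")          # ~0.56 with these numbers
print(effective_latency(0.9, L_MCDRAM, L_DDR))           # 0.9*50 + 0.1*140 = 59.0
```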
    The second mode is ‘Flat Mode’, allowing the MCDRAM to have a physical addressable space which allows the programmer to migrate data structures in and out of the MCDRAM. This can be useful to keep large structures in DDR4 and smaller structures in MCDRAM. We were told that this mode can also be simulated by developers who do not have hardware in hand yet in a dual CPU Xeon system if each CPU is classified as a NUMA node, and Node 0 is pure CPU and Node 1 is for memory only. The downside of the flat mode means that the developer has to maintain and keep track of what data goes where, increasing software design and maintenance costs.
    The final mode is a hybrid mode, giving a mix of the two.
    In flat mode, there are separate ways to access the high performance memory – either as a pure NUMA node (only applicable if the whole program can fit in MCDRAM), using direct OS system calls (not recommended) or through the Memkind libraries which implements a series of library calls. There is also an interposer library over Memkind available called AutoHBW which simplifies some of the commands at the expense of fine control. Under Memkind/AutoHBW, data structures aimed at MCDRAM have their own commands in order to be generated in MCDRAM.
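    The kind-based allocation idea behind the Memkind library can be sketched as follows. The class and method names here are invented for illustration; the real C API is hbw_malloc()/hbw_free() from hbwmalloc.h, or memkind_malloc() with a kind handle, and the fallback behavior mirrors what memkind's preferred policy does when high-bandwidth memory runs out.

```python
# Sketch of kind-based tiered allocation: the programmer names the memory tier
# for each structure, and the allocator tracks placement. TieredAllocator and
# alloc() are hypothetical stand-ins for the Memkind C API.
class TieredAllocator:
    def __init__(self, hbw_capacity):
        self.hbw_capacity = hbw_capacity   # e.g. 16 (GB) of MCDRAM
        self.hbw_used = 0
        self.placement = {}                # buffer name -> tier it landed in

    def alloc(self, name, size, kind="default"):
        # Hot structures request the "hbw" kind; fall back to DDR4 when the
        # high-bandwidth tier is exhausted (memkind's preferred-policy behavior).
        if kind == "hbw" and self.hbw_used + size <= self.hbw_capacity:
            self.hbw_used += size
            self.placement[name] = "mcdram"
        else:
            self.placement[name] = "ddr4"
        return self.placement[name]

mem = TieredAllocator(hbw_capacity=16)
print(mem.alloc("lattice", 12, kind="hbw"))   # fits -> mcdram
print(mem.alloc("history", 64, kind="hbw"))   # too big -> ddr4 fallback
print(mem.alloc("scratch", 2))                # default kind -> ddr4
```

This is the maintenance cost the article mentions: every allocation site now carries a placement decision that the developer has to get right and keep right as data sizes change.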
    Intel’s VTune utilities will be enabled with KNL from VTune Amplifier XE2016.
    There was some talk regarding Intel’s upcoming 3D XPoint which offers another layer of memory but this time in a non-volatile format. We were told to expect 3D XPoint to become a part of future Xeon Phi designs, along with multi-level memory management (potentially three: MCDRAM, DDR, XPoint), although the exact nature of how many levels of memory, or what types and how to use them, is still undecided. What we do know about the future is that the 3rd generation of Xeon Phi will be built on 10nm and named Knights Hill, featuring 2nd generation Omni-Path host fabric.
    Source: Intel, SC15


    More...

  8. #5498

    Anandtech: AMD Releases Catalyst 15.11.1 Beta Drivers

    With AMD continuing to deliver beta driver updates left and right lately, today they come to us with another update. Another of AMD’s point driver updates, Catalyst 15.11.1 primarily brings performance updates for some of the headlining titles of the season, and ups the Display Driver Version to 15.201.1151.1010.
    Overall this is a very straightforward performance driver, with AMD pushing out a batch of optimizations for Star Wars: Battlefront, Fallout 4, Assassin's Creed Syndicate, and Call of Duty: Black Ops III. Otherwise there are no bug fixes listed, though AMD does list some known issues, including that Assassin's Creed Syndicate and Star Wars: Battlefront cannot launch in full screen mode on some laptops with an Intel CPU and an AMD GPU.
    Meanwhile, it’s worth noting that this is likely one of the last Catalyst driver releases we’ll see from AMD. Earlier this month AMD announced their new Crimson driver branding and overhaul of their control center, and while AMD has not announced a specific launch date yet, we do know it’s expected before the end of the year, only a short 6 weeks away.
    Anyhow, as always those interested in reading more or installing the updated beta drivers for AMD's desktop, mobile, and integrated GPUs can find them on AMD's Catalyst beta download page.


    More...

  9. #5499

    Anandtech: Best Android Phones: Holiday 2015

    As we hit the middle of November, the holiday shopping season is starting up. As we have for the past several years, this year we are putting together a series of holiday guides with recommendations for various product categories and some quick links to those products. These holiday guides also act as a way for us to look over all the devices that have been released in a given year to see which still hold up.
    We'll be starting things off this year with smartphones. Smartphones are an enormous market, and with the average phone lifetime still being only 18-24 months, many gifts given this holiday season are going to be smartphones. So let's take a look at what we believe to be the best Android phones that you can buy this holiday season.
    Best Android Phablet: Samsung Galaxy Note5

    Buy Samsung Galaxy Note5 (32GB, Black) on Amazon.com
    The term phablet is a bit silly in my opinion, but it has become a fairly common term to describe smartphones with very large profiles. The definition of a phablet is not exactly concrete, and it mainly has to do with a device's chassis size. For example, the Nexus 6 and Galaxy Note5 are clearly phablets, and it's fairly safe to say that the iPhone 6s Plus is one too. However, I don't know if I would describe the LG G4 as a phablet. It has the same screen size as the iPhone 6s Plus, but the use of on-screen buttons and smaller overall chassis size mean that it ends up straddling the line between your standard smartphone and a phablet. When looking at which devices are available in many regions, I think it's pretty clear which phablet offers the best value at the absolute high end, and which offers the best value for someone who is looking to spend less than what they would on a typical flagship.
    I don't think it would be wrong to say that Samsung really pioneered the phablet category. The original Galaxy Note was laughed at by many, but as time has gone on Samsung has improved on it, and now every vendor offers a similarly sized device. With that in mind, it shouldn't come as a surprise that the Galaxy Note5 is my recommendation for a high end phablet. It comes with everything that makes the Galaxy S6 a great phone, but in a larger size and with some additional improvements. Just as an overview, you're getting a 5.7" 2560x1440 AMOLED display, Samsung's Exynos 7420 SoC, 4GB of LPDDR4 RAM, and 32, 64, or 128GB of internal NAND. Some differences from the Galaxy S6 apart from simply being larger include improved camera image processing, making it a serious contender for the title of best smartphone camera, and the inclusion of Samsung's S-Pen for navigation and drawing.
    The Galaxy Note 5 costs $699 for the 32GB version in the US. There are often deals that can help bring the price down a bit, such as a recent $50 off offer from T-Mobile. The 64GB model bumps the price to $779. It's worth noting the prices for the Galaxy S6 Edge+ as well, which is to the Galaxy Note5 what the Galaxy S6 Edge is to the standard Galaxy S6. It starts at $779 for 32GB, and $859 for 64GB. I personally think the edge design looks cool, but there's definitely a trade off in terms of ergonomics, and I don't think it's worth the additional cost unless you really want to own Samsung's absolute highest end phone.
    For buyers who aren't fans of the Galaxy Note5, or who are looking for something that isn't quite as expensive, the Nexus 6P is definitely worth considering. Like the Galaxy Note5 it has a 5.7" 2560x1440 AMOLED display, but inside you get Qualcomm's Snapdragon 810 paired with 3GB of LPDDR4 RAM and 32GB of NAND.
    Some highlights of the Nexus 6P are the camera and the chassis. While we haven't published our Nexus 6P review yet, it uses the same sensor and lens arrangement as the Nexus 5X which I felt has one of the best cameras of any smartphone. The aluminum chassis of the 6P may also be more appealing than the metal/glass design of the Note5, although I didn't feel that the design and ergonomics were at the same level as devices like Huawei's own Mate S or the iPhone 6s Plus.
    Of course, the biggest appeal of the Nexus 6P is its price. At $499 for 32GB, it undercuts most flagship phablets by $200 or so, while being competitive in many other respects. You definitely lose out on the performance of Samsung's Exynos 7420 SoC, but there are obviously tradeoffs that are made when targeting a lower price. The promise of software updates along with a great camera, an aluminum build, and a great fingerprint scanner make the Nexus 6P a very worthwhile choice for a phablet at a lower price than the latest and greatest flagships.
    Best High-End Android Smartphone: Samsung Galaxy S6

    Buy Samsung Galaxy S6 (32GB, Black) on Amazon.com
    While phablets have grown immensely in popularity, the normal flagship devices from the players in the Android space tend to be smaller than the 5.7-6.0" displays that ship on phablets. Not having to push a large size also opens up more opportunities to offer a great device at a lower price than the competition. Taking that into consideration, I think there are two key flagship devices that are worth considering if looking for a flagship phone in a typical size, along with one clear winner for a smartphone that offers a lot for a lower price than flagship smartphones.
    The Galaxy S6 really needs no introduction. Along with the Note5 it's really the only Android phone this year that was able to push the performance of Android devices forward, courtesy of its Exynos 7420 SoC. Along with still being the fastest Android phone around, the Galaxy S6 comes with a top notch 5.1" 2560x1440 AMOLED display, 3GB of LPDDR4 RAM, 32, 64, or 128GB of NAND, and the same 16MP camera that the Galaxy Note5 uses.
    It is a bit disappointing that the Galaxy S6 is still the fastest Android phone out there many months after it was released. While some may feel it's actually best to wait for the next generation Galaxy phone from Samsung, such a launch is still one or two quarters away, and if someone is looking to get the most powerful Android smartphone for the holidays the Galaxy S6 is definitely it. As far as the price goes, the fact that the S6 is a bit older now means you can find some appealing discounts. Right now on T-Mobile USA you can get the 32GB model for $579, and at $659 you get 128GB which is a pretty great deal. Like the Note5, I wouldn't recommend paying the extra money for the Edge version of the phone unless you really want the more unique design, as the ergonomics are honestly a downgrade.
    If you're looking for something a bit larger, or less expensive than the Galaxy S6, the LG G4 is definitely worth considering. Although it has a 5.5" display, it's much smaller than a phone like the iPhone 6s Plus due to its small bezels on all sides, and the use of on screen buttons. In my experience it's still a bit too big to be used comfortably in a single hand even with affordances like the back-mounted volume rocker, but it's not really a phablet either. As far as its specs go, you get Qualcomm's Snapdragon 808 SoC, 3GB of LPDDR3 RAM, 32GB of NAND, and a 16MP Sony IMX234 rear-facing camera. It also has microSD expansion and a removable battery for the users who were upset with Samsung's removal of those features on this year's Galaxy flagships.
    Price wise, the LG G4 sells for around $479, which is about $100 less than you'd pay for the Galaxy S6. The size of the phone is definitely worth considering in addition to the price, as the S6 is much easier to use with a single hand, but if you want a phone with a larger display without moving completely into phablet territory the G4 is definitely a phone to heavily consider.
    Best Mid-Range Android Smartphone: Google Nexus 5X

    Next we come to the lower cost high end, and here there's only one real Android device worth mentioning: the Nexus 5X. This is actually my personal favorite Android device from this year, and I published my review of it last week. In many ways it's similar to the LG G4, which isn't surprising when you consider that it's made by LG. It has a Qualcomm Snapdragon 808 SoC, 2GB of LPDDR3 RAM, 16 or 32GB of NAND, and the same great 12MP camera that you get in the Nexus 6P.
    To sum up my thoughts on the Nexus 5X from my review, I'll say that it's imperfect, but I think it's unbeatable at $379. Snapdragon 808 doesn't deliver the performance jump that you'd expect from two years of technological advancement since the Nexus 5, but you still get a great display, an amazing camera, good battery life, a quick and simple fingerprint scanner, and a plastic but very solid chassis. The fact that the 5X includes the same camera as the Nexus 6P at its $379 price is really what gives it an edge, and if you're looking to get something smaller than a phablet without paying the $600-700 commanded by flagship phones I don't think you can go wrong with the Nexus 5X.
    Best Budget Android Phone: Motorola Moto G (2015)

    Buy Motorola Moto G (3rd Gen, 16GB, Black) on Amazon.com
    The last category on the list is the budget phone, which to me includes anything from $250 down, although $250 is certainly pushing it. There are certainly a large number of Android devices that fit this category, and I'm sure some people will feel that it makes the most sense to look at importing phones from Xiaomi rather than buying a phone from a more global brand where you may not get as much for your money. I can only really speak from experience, and I think importing comes with its own issues regarding the warranty, customs fees, and carrier compatibility. There was only one budget device from the big Android players that I looked at this year and feel is really worth considering, and it's the 2GB version of the 2015 Moto G.
    The 2015 Moto G comes in two versions. Both have a Qualcomm Snapdragon 410 SoC, a Sony IMX214 13MP camera, and a 1280x720 IPS display. However, while $179 gets you a version with 8GB of NAND and 1GB of RAM, $219 doubles both of those to 16GB and 2GB respectively. With the amount of RAM overhead created by Java applications that use garbage collection I really don't think 1GB is a usable amount of memory on an Android device unless you're shopping in the sub $100 range where you're not likely to be using many apps at all. For that reason, I think the 2GB model is the best budget smartphone, as it includes a relatively good camera for its price, has enough RAM, and should be fast enough for the needs of anyone shopping for a smartphone at this price. It's also waterproof, and has an extremely long battery life.
    While there are other budget Android phones, you end up having to pay significantly more than the Moto G to get any significant improvement, and dropping the price even lower ends up coming with a number of compromises that aren't worth the money you save.


    More...

  10. #5500

    Anandtech: NVIDIA Re-launches the SHIELD Tablet as the SHIELD Tablet K1

    The life of the NVIDIA SHIELD Tablet has had some ups and downs. Josh reviewed it last year, and at the time he found that NVIDIA's tech for game streaming offered an interesting value proposition. Unfortunately, NVIDIA was forced to issue a total recall on the tablets due to overheating concerns earlier this year, and while they shipped replacement devices to consumers, the SHIELD Tablet ended up being removed from sale. This was quite unfortunate, and it left a gap in the Android tablet market that I really haven't seen any vendor fill.
    Today NVIDIA is re-introducing the SHIELD Tablet with a new name. It's now called the SHIELD Tablet K1, something I hope implies we will soon see a SHIELD Tablet X1.
    While the name is new, we're looking at the exact same tablet that launched last year. I've put the specs in the chart below as a refresher.
    NVIDIA SHIELD Tablet K1
    SoC NVIDIA Tegra K1 (2.2 GHz 4x Cortex A15r3, Kepler 1 SMX GPU)
    RAM 2 GB DDR3L-1866
    NAND 16GB NAND + microSD
    Display 8” 1920x1200 IPS LCD
    Camera 5MP rear camera, 1.4 µm pixels, 1/4" CMOS size. 5MP FFC
    Dimensions / Mass 221 x 126 x 9.2mm, 390 grams
    Battery 5197 mAh, 3.8V chemistry (19.75 Whr)
    OS Android 5.1.1 Lollipop
    Other Connectivity 2x2 802.11a/b/g/n + BT 4.0, USB2.0, GPS/GLONASS, Mini-HDMI 1.4a
    Accessories SHIELD DirectStylus 2 - $19.99
    SHIELD Controller - $59.99
    SHIELD Tablet K1 Cover - $39.99
    Price $199
    The NVIDIA SHIELD Tablet K1 still has NVIDIA's Tegra K1 SoC, with four Cortex A15 cores and the incredibly fast single SMX Kepler GPU. The SoC is paired with 2GB of LPDDR3 RAM and 16GB of NAND, with the original 32GB model being dropped. There's still microSD expansion for storing media, and with Android Marshmallow expandable storage will lose much of its third class status on Android which will be helpful.
    Of course, the biggest change here beyond the fact that the SHIELD Tablet is being put back on sale is its new price. At $199 it's $100 cheaper than when it first launched, and it makes it one of the only good tablets that you can actually get at that price point with the Nexus 7 having been gone for some time now. NVIDIA's optional accessories are all available as well, and if you plan to use the gaming features of the SHIELD Tablet K1 I would definitely factor the price of the controller into your cost consideration. In any case, it's good to see the SHIELD Tablet K1 back on sale, and at $199 I think it's definitely worth considering if you're looking for a tablet at that price.



    More...
