
Thread: Anandtech News

  1. RSS Bot FEED

    Anandtech: SanDisk Extreme 500 Portable SSD Capsule Review

    The last few years have seen rapid advancements in flash technology and the rise of USB 3.0 as a ubiquitous high-speed interface on computers. These have led to the appearance of small and affordable direct attached storage units with very high performance for day-to-day data transfer applications. We have already looked at some flash drives with SSD controllers and a USB 3.0-to-SATA bridge over the last year.
    At Computex 2015, SanDisk announced a host of high-speed flash-based direct attached storage units. With the introduction of the Extreme 500, SanDisk became the second Tier 1 vendor to enter the external SSD market. SanDisk provided us with a 240GB Extreme 500 review sample.
    Hardware-wise, the Extreme 500 is based on the SanDisk SSD PLUS series. While SanDisk refuses to divulge controller and flash information, we did open up our review sample to get a look at the internal components. SanDisk did indicate that the particular controllers and memory may change over time, but are guaranteed to meet the performance specs.
    • SATA Controller: Silicon Motion SMI2246XT
    • Flash: 4x 64GB SanDisk 05448 064G
    • SATA-to-USB 3.0 Bridge: ASMedia ASM1153E
    The drive doesn't give up any information via either CrystalDiskInfo or the latest version of ChipGenius. However, SanDisk confirmed that the Extreme 500 does support TRIM and also performs garbage collection when plugged in. There is no support for explicit over-provisioning, but there is really no advantage to doing it, since the drive already comes over-provisioned (given the 240 GB capacity with 4x 64 GB flash chips).
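The over-provisioning arithmetic from the paragraph above works out to roughly 6.7%. A quick sketch follows; expressing the spare area as a percentage of the usable capacity is our convention, not SanDisk's:

```python
# Over-provisioning math for the Extreme 500, using the figures in the
# review: four 64 GB flash packages behind 240 GB of usable capacity.

def overprovisioning_pct(raw_gb: float, usable_gb: float) -> float:
    """Spare area expressed as a percentage of the user-visible capacity."""
    return (raw_gb - usable_gb) / usable_gb * 100

raw = 4 * 64   # 256 GB of raw NAND
usable = 240   # advertised capacity
print(f"{overprovisioning_pct(raw, usable):.1f}% over-provisioned")  # prints "6.7% over-provisioned"
```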
    Testbed Setup and Testing Methodology

    Evaluation of DAS units on Windows is done with the testbed outlined in the table below. For devices with USB 3.0 connections (such as the Extreme 500 that we are considering today), we utilize the USB 3.0 port directly hanging off the PCH. However, SanDisk suggested reviewing with a USB 3.1 port, and hence, we also tested with an ASRock USB 3.1/A+C PCIe card attached to the primary PCIe x16 slot of the testbed below.
    AnandTech DAS Testbed Configuration
    Motherboard Asus Z97-PRO Wi-Fi ac ATX
    CPU Intel Core i7-4790
    Memory Corsair Vengeance Pro CMY32GX3M4A2133C11
    32 GB (4x 8GB)
    DDR3-2133 @ 11-11-11-27
    OS Drive Seagate 600 Pro 400 GB
    Optical Drive Asus BW-16D1HT 16x Blu-ray Write (w/ M-Disc Support)
    Add-on Card Asus Thunderbolt EX II
    Chassis Corsair Air 540
    PSU Corsair AX760i 760 W
    OS Windows 8.1 Pro
    Thanks to Asus and Corsair for the build components
    The full details of the reasoning behind choosing the above build components can be found here. The list of DAS units used for comparison purposes is provided below.

    • SanDisk Extreme 500 240GB - USB 3.1
    • SanDisk Extreme 500 240 GB - USB 3.0
    • Corsair Voyager GTX 256GB
    • Corsair Voyager GTX v2 256GB
    • Patriot Supersonic Rage 2 256GB
    • VisionTek Pocket SSD 240GB

    Synthetic Benchmarks - ATTO and CrystalDiskMark

    SanDisk claims read and write speeds of 415 MBps and 340 MBps respectively, and these are backed up by the ATTO benchmarks provided below. Unfortunately, these access traces are not very common in real-life scenarios. Interestingly, we don't get these numbers with the USB 3.0 port.
    CrystalDiskMark, despite being a canned benchmark, provides a better estimate of the performance range with a selected set of numbers. As evident from the screenshot below, the performance can dip to as low as 20 MBps for 4K accesses. Fortunately, those are not typical access traces for external drives.
    Benchmarks - robocopy and PCMark 8 Storage Bench

    Our testing methodology for DAS units also takes into consideration the typical use-cases for such devices. The most common usage scenario is the transfer of large amounts of photos and videos to and from the unit. A secondary usage scenario is importing files directly off the DAS into a multimedia editing program such as Adobe Photoshop.
    In order to tackle the first use-case, we created three test folders with the following characteristics:

    • Photos: 15.6 GB collection of 4320 photos (RAW as well as JPEGs) in 61 sub-folders
    • Videos: 16.1 GB collection of 244 videos (MP4 as well as MOVs) in 6 sub-folders
    • BR: 10.7 GB Blu-ray folder structure of the IDT Benchmark Blu-ray (the same that we use in our robocopy tests for NAS systems)
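The robocopy transfers above amount to timing recursive folder copies. A minimal sketch of that measurement in stdlib Python is below; the synthetic 1 MB files are stand-ins for the photo, video, and Blu-ray folders, not part of our actual suite:

```python
# Time a recursive folder copy and report average throughput, which is
# essentially what the robocopy benchmark captures at a coarser level.
import os
import shutil
import tempfile
import time

def copy_throughput_mbps(src: str, dst: str) -> float:
    """Copy src to dst recursively; return the average rate in MB/s."""
    total = sum(os.path.getsize(os.path.join(root, name))
                for root, _, files in os.walk(src) for name in files)
    start = time.perf_counter()
    shutil.copytree(src, dst)
    return total / (time.perf_counter() - start) / 1e6

work = tempfile.mkdtemp()
src = os.path.join(work, "photos")
os.makedirs(src)
for i in range(4):  # 4 x 1 MB of incompressible data
    with open(os.path.join(src, f"img_{i}.raw"), "wb") as f:
        f.write(os.urandom(1_000_000))

rate = copy_throughput_mbps(src, os.path.join(work, "copy"))
print(f"~{rate:.0f} MB/s")
```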

    For the second use-case, we take advantage of PCMark 8's storage bench. The storage workload involves games as well as multimedia editing applications. The command line version allows us to cherry-pick storage traces to run on a target drive. We chose the following traces.

    • Adobe Photoshop (Light)
    • Adobe Photoshop (Heavy)
    • Adobe After Effects
    • Adobe Illustrator

    Usually, PCMark 8 reports the time to complete each trace, but the detailed log report has the read and write bandwidth figures, which we present in our performance graphs. Note that the bandwidth numbers reported in the results don't involve idle time compression. Results might appear low, but that is part of the workload characteristic. Note that the same testbed is used for all DAS units, so the numbers for each trace can be compared across different DAS units.
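To illustrate the idle time compression point, consider a hypothetical trace (the numbers below are invented for illustration, not measured results): dividing the bytes moved by the full trace duration, idle gaps included, yields a much lower figure than dividing by busy time alone.

```python
# Contrast bandwidth with and without idle time compression for a
# hypothetical storage trace. All figures here are illustrative.
bytes_moved = 2_000_000_000   # 2 GB touched by the trace
busy_s = 10.0                 # seconds the disk spent servicing I/O
idle_s = 40.0                 # idle gaps / think time within the trace

reported = bytes_moved / (busy_s + idle_s) / 1e6   # idle time included
busy_only = bytes_moved / busy_s / 1e6             # idle time compressed away

print(f"with idle time: {reported:.0f} MB/s, busy time only: {busy_only:.0f} MB/s")
# prints "with idle time: 40 MB/s, busy time only: 200 MB/s"
```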

    Performance Consistency

    Yet another interesting aspect of these types of units is performance consistency. Aspects that may influence this include thermal throttling and firmware caps on access rates to avoid overheating or other similar scenarios. This aspect is an important one, as the last thing that users want to see when copying over, say, 100 GB of data to the flash drive is the transfer rate dropping to USB 2.0 speeds. In order to identify whether the drive under test suffers from this problem, we instrumented our robocopy DAS benchmark suite to record the flash drive's read and write transfer rates while the robocopy process took place in the background. For supported drives, we also recorded the internal temperature of the drive during the process. The graphs below show the speeds observed during our real-world DAS suite processing. The first three sets of writes and reads correspond to the photos suite. A small gap (for the transfer of the videos suite from the primary drive to the RAM drive) is followed by three sets for the next data set. Another small RAM-drive transfer gap is followed by three sets for the Blu-ray folder.
    An important point to note here is that each of the first three blue and green areas correspond to 15.6 GB of writes and reads respectively. Throttling, if any, is apparent within the processing of the photos suite itself.
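The instrumentation idea can be sketched in a few lines of Python: a sampler thread records how many bytes the transfer has moved at fixed intervals, producing the MB/s trace in which throttling would appear as a sustained drop. The in-memory counter below is a stand-in; an actual run would read the OS's per-disk byte counters.

```python
# Sample a running transfer at fixed intervals to build an MB/s trace.
# The transfer loop here only simulates writes; it is not a real disk test.
import threading
import time

samples: list[float] = []   # MB/s observed per interval
written = 0                 # bytes "transferred" so far
done = threading.Event()

def transfer(total_bytes: int, chunk: int = 1_000_000) -> None:
    global written
    while written < total_bytes:
        time.sleep(0.001)   # stand-in for an actual write to the drive
        written += chunk
    done.set()

def sampler(interval: float = 0.01) -> None:
    prev = 0
    while not done.wait(interval):
        cur = written
        samples.append((cur - prev) / interval / 1e6)
        prev = cur
    samples.append((written - prev) / interval / 1e6)  # final (possibly partial) interval

t = threading.Thread(target=transfer, args=(20_000_000,))
s = threading.Thread(target=sampler)
s.start(); t.start(); t.join(); s.join()
print(f"{len(samples)} samples recorded, {written / 1e6:.0f} MB transferred")
```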
    It can be seen that the Extreme 500 manages to remain under 55°C even under heavy workload conditions. There is no throttling to be seen anywhere, which is not really surprising given the form factor of the unit.
    Concluding Remarks

    Coming to the business end of the review, the Extreme 500 continues SanDisk's tradition of improving the performance of its external USB 3.0 flash storage every year. As icing on the cake, SanDisk is officially using a real SSD controller, the Silicon Motion SMI2246XT. The performance of the drive leaves us with no doubt that it would make a decent portable OS drive (even though SanDisk doesn't advertise it for that purpose).
    Our major concern with the unit is that it refuses to perform up to expectations with the native USB 3.0 port on the Z97 PCH (at least in our testbed configuration). The dismal write performance with the USB 3.0 port is indeed quite strange. Since the Extreme 500 is a USB 3.0/5 Gbps device, it shouldn't really get a bump from the higher bandwidth of USB 3.1 Gen 2/10 Gbps. We believe the issue could simply be down to how the Z97 PCH USB 3.0 port negotiates speed with the USB bridge chip. Evidently, the ASM1153E on the Extreme 500 is able to perform well with the ASM1143 USB 3.1 controller on the add-on card.
    Minor points of concern include SanDisk's indication that the controllers and flash could change in future production runs, and the inability of standard tools to recognize the drive and act on its S.M.A.R.T. data.

    The 240GB SSD PLUS is available on Amazon for $70, while a UASP-capable USB 3.0 2.5" drive enclosure can be purchased for less than $15. At the $120 price point, the Extreme 500 does carry a significant premium (even when the bundled encryption software is considered). That said, the portability and industrial design of the Extreme 500 are much better than those of an off-the-shelf enclosure. In terms of performance, the Extreme 500 240GB manages to come out on top of the pack for most common external drive usage scenarios and exhibits excellent performance consistency. We would like to see SanDisk explore such high-performance platforms in a flash drive form factor.


  2. RSS Bot FEED

    Anandtech: The OnePlus 2 Review

    In early 2014, there was a lot of excitement among Android enthusiasts for an upcoming smartphone called the OnePlus One. The company producing it was a Chinese manufacturer, and they were a new entrant to the smartphone space. OnePlus's marketing campaign was structured to generate excitement over the prospect of receiving similar specifications to a high end smartphone in a device that was priced substantially lower. Once the device launched, it was clear that OnePlus had delivered on that promise in some respects, but not as much in others. The performance and display quality were superb for a $300-350 device, but parts of the software and the camera processing were clearly lacking.
    While the OnePlus One wasn't perfect, there really aren't any smartphones that are. For $300-350, it certainly offered users shopping on a budget a lot of power for their money. With many aspects of the phone already being executed well, one would expect that OnePlus's next phone would serve to iron out the issues and improve on some of the original's failings. That brings us to 2015, with the OnePlus 2 serving as the new flagship smartphone from OnePlus. Read on for the full review of the OnePlus 2, and find out if it holds up as well as its predecessor.


  3. RSS Bot FEED

    Anandtech: Workstation Love at Supercomputing 15

    One of the interesting angles at Supercomputing 15 was workstations. In a show where high performance computing is paramount, most situations involve offloading software onto a small cluster or an off-site, heavily funded mega-cluster. However, there were a few interesting under-the-desk systems offered by system integrators outside the usual Dell/HP/Lenovo conglomerates, catering to the tastes of workstation aficionados across high performance computing, networking and visualization.
    First is a system that a few of my Twitter followers saw - a pre-overclocked box from BOXX. Not only was it odd to see an overclocked system at a supercomputing conference (although some financial environments crave low latency), but BOXX had pushed this one near the extreme.
    To get more than a few extra MHz, the system needs to move away from Xeon, which this does, at the expense of ECC memory for the usual reasons. Here is an i7-5960X overclocked to a modest 4.1 GHz, 128GB of DDR4-2800 memory, and four GTX Titan X cards. Each of the cards is overclocked by around 10-15%, and both the CPU and GPUs are water cooled in a massive custom case. The side window panel is optional. A best-of-everything build is still a little out of reach when it comes to PCIe NVMe SSDs, but the case offers enough storage capacity for a set of drives in a mix of RAID 0/1/5/10 or JBOD.
    They had the system attached to a 2x2 wall of 4K monitors, all performing various 3D graphics and rendering workloads. Of course, the system aims to be a workstation wet dream, so the goal here is to show off what BOXX can do with off-the-shelf parts before going fully custom with that internal water loop. I discussed the range of overclocks with the rep, and they said it has to strike a balance between speed and serviceability: parts should fail rarely and be quick and easy to replace when they do, which is why the CPU and GPUs were not pushed to the bleeding edge. It makes sense.
    Microway also had a beast on show. What actually drew me to Microway in the first place was that they were the first company I saw showing a new Knights Landing (link) motherboard and chip, but right next to it was essentially a proper server in a workstation.
    Initially I thought it was an AMD setup, as I had seen quad Magny Cours/Istanbul/Abu Dhabi based systems in workstation-like cases before. But no, that is a set of four E5-4600 v3 series CPUs in a Supermicro motherboard. The motherboard is aimed at servers, but Microway has put it in a case, and each socket has a closed loop liquid cooler. Because these are v3 CPUs, there is scope for up to 72 cores / 144 threads using E5-4669 v3 processors, which is more similar to what you would see in a 4U rack based arrangement. Because this is in a case, and the board is arranged the way it is, PCIe coprocessor support varies based on which PCIe root hub it comes from, but I was told that it will be offered with the standard range of PCIe devices as well as Intel's Omni-Path when those cards come to retail. Welcome to the node under the desk. You need a tall desk. I'm reaching out to Microway to get one for review, if only out of perverse curiosity about workstation CPU compute power.
    So there's one node in a case - how about seven? In collaboration with Intel, International Computer Concepts Inc has developed an 8U half-width chassis that will take any half-width server unit up to a specific length.

    Each 1U has access to a 10GBase-T port and an internal custom 10G switch with either copper or fibre outputs, depending on how you order it. In the example shown to us, each 1U was supported by dual Xeon D nodes, which will offer up to 64 threads per 1U, times 16 units, when fully populated with the next Xeon D generation. Of course, some parts of the system can be replaced with storage nodes, or full-fat Xeon nodes. Cooling wasn't really discussed here, but I was told that the system should be populated with noise in mind - so giving each 1U a pair of GPUs probably isn't a good idea. The system carries its own power backplane as well, with dual redundant supplies up to 1200W if I remember correctly.
    With this amount of versatility, particularly when testing for a larger cluster (or even as an SMB deployment), it certainly sounds impressive. I'm pretty sure Ganesh wants one for his NAS testing.


  4. RSS Bot FEED

    Anandtech: The EKWB EK-XLC Predator 240 Liquid Cooler Review

    Today we are having a look at the EK-XLC Predator 240, the first AIO liquid cooling solution from EKWB. EKWB is a company that specializes in, and is known for, custom liquid cooling products, but with the EK-XLC Predator 240 the company is trying to bring the performance of its custom liquid cooling solutions to the AIO market. We thoroughly examine and compare their new product in this review.


  5. RSS Bot FEED

    Anandtech: AMD's 2016 Linux Driver Plans & GPUOpen Family of Dev Tools: Investing In

    Earlier this month AMD’s Radeon Technologies Group held an event to brief the press of their plans for 2016. Part of a larger shift for RTG as they work to develop their own identity and avoid the mistakes of the past, RTG has set about being more transparent and forthcoming in their roadmap plans, offering the press and ultimately the public a high-level overview of what the group plans to accomplish in 2016. The first part of this look into RTG’s roadmap was released last week, when the company unveiled their plans for their visual technologies – DisplayPort/HDMI, FreeSync, and HDR support.
    Following up on that, today RTG is unveiling the next part of their roadmap. Today's release is focused on Linux and RTG’s developer relations strategy, with RTG laying out their plans to improve support on the former and to better empower developers on the latter. Both RTG’s Linux support and developer relations have suffered somewhat from RTG’s much smaller market share and more limited resources compared to NVIDIA, and while I don’t think even RTG expects to right everything overnight, they do have a clear idea of where they have gone wrong and what they can do to correct it.
    Linux Driver Support: AMDGPU for Open & Closed Source

    The story of RTG’s Linux driver support is a long one, and to put it kindly it has frequently not been a happy story. Both in the professional space and the consumer space RTG has struggled to put out solid drivers that are competitive with their Windows drivers, which has served to only further cement the highly favorable view of NVIDIA’s driver quality in the Linux market. Though I don’t expect RTG will agree with this, there has certainly been a very consistent element of their Linux driver being a second-class citizen in recent years.
    To that end, RTG has been embarking on developing a new driver over the past year to serve their needs in both the consumer and professional spaces, and for this driver to be a true first-class driver. This driver, AMDGPU, was released in its earliest form back in April, but it’s only this month that RTG has finally begun discussing it with the larger (non-Linux) technical press. As such there’s not a great deal of new information here, but I do want to spend a moment highlighting RTG’s plans thus far.
    AMDGPU is part of a larger effort for RTG to unify all of their Linux graphics driver needs behind a single driver. AMDGPU itself is an open source kernel space driver – the heart of a graphics driver in terms of Linux driver design – and is intended to be used for all RTG/AMD GPUs, consumer and professional. On the consumer side it replaces the previously awkward arrangement of RTG maintaining two different drivers, their open source driver and their proprietary driver, with the open source driver often suffering for it.
    With AMDGPU, RTG will be producing both a fully open source and a mixed open/closed source driver, both using the AMDGPU kernel space driver as their core. The pure open source driver will fulfill the need for a fully open driver for distros that only ship open source or for users who specifically want/need all open source. Meanwhile RTG’s closed driver, the successor to the previous Catalyst/fglrx driver, will build off of AMDGPU but add RTG’s closed user mode driver components such as their (typically superior) OpenGL and multimedia runtimes.
    The significant change here is that by having the RTG closed source driver based around the open source driver, the company is now only maintaining a single code base, is pushing as much as possible into open source, and that the open source driver is receiving these features far sooner than it was previously. This greatly improves the quality of life for open source driver users, but it’s also reciprocal for RTG: it’s a lot easier to keep up to date with Linux kernel changes with an open source kernel mode driver than a closed source driver, and quickly integrate improvements submitted by other developers.
    This driver is also at the heart of RTG’s plans for the professional and HPC markets. At SC15 AMD announced their Boltzmann initiative to develop a CUDA source code shim and the Heterogeneous Compute Compiler for their GPUs, all of which will be built on top of their new Linux driver and the headless + HSA abilities it will be able to provide. And it’s also here where RTG is also looking to capitalize on the open source nature of the driver, giving HPC users and developers far greater access than what they’re accustomed to with NVIDIA’s closed-source driver and allowing them to see the code behind the driver that’s interpreting and executing their programs.
    GPUOpen: RTG’s SDKs, Libraries, & Tools To Go Open Source

    The second half of RTG’s briefing was focused on developer relations. Here the company has several needs and initiatives, ranging from countering NVIDIA and their GameWorks SDKs/libraries to developing tools to better utilize RTG’s hardware and the heterogeneous system architecture. At the same time the group is also looking to better leverage their sweep of the current generation consoles, and turn those wins into a distinct advantage within the PC space.
    To that end, not unlike the RTG’s Linux efforts, the group is embarking on a new, more open direction for GPU SDK and library development. Being announced today is RTG’s GPUOpen initiative, which will combine RTG’s various SDKs and libraries under the single GPUOpen umbrella, and then take all of these components open source.
    Starting with the umbrella aspect, with GPUOpen we’re going to see RTG travel down the same path of bundled tools and developer portals that NVIDIA has been following for the last couple of years with GameWorks. To NVIDIA’s credit they have been very successful in reaching developers via this method – both in terms of SDKs and in terms of branding – so it makes a great deal of sense for RTG to develop something similar. Consequently, GPUOpen will be bringing RTG’s various SDKs, tools, and libraries such as TressFX, LiquidVR, CodeXL, and their DirectX code samples underneath the GPUOpen branding umbrella. At the same time RTG is developing a GPUOpen portal to make all of these resources available from a single location, and will also be using that portal to publish news and industry updates for game developers.
    But more interesting than just bringing all of RTG’s developer resources under the GPUOpen brand is what RTG is going to do with those resources: set it free. While the group has dabbled with open source over the last several years, beginning with GPUOpen in 2016 they will be fully committed to it. Everything at the GPUOpen portal will be open source and licensed under the highly permissive MIT license, allowing developers to not only see the code behind RTG’s tools and libraries, but to integrate that code into open and closed source projects as they see fit.
    Previously RTG had offered some bits and pieces of their code on an open source basis, but those projects were typically using more restrictive licenses than MIT. GPUOpen on the other hand will see the code behind these projects become available to developers under one of the most permissive licenses out there (short of public domain), which in turn allows developers to essentially do whatever they please while also avoiding any compatibility issues with other open source licenses (e.g. GPL).
    Otherwise the fact that RTG is going to be placing so much of their code into the open source ecosystem is a big step for the group. Ultimately they believe that there is much to gain both from letting developers freely use this code, and from allowing them to submit their own improvements and additions back to RTG. In a sense this is the anti-GameWorks: whereas NVIDIA favors closed source libraries or limited sharing, RTG is placing everything out there in hope that their development efforts coupled with the ability for other developers to contribute via open source development will produce a better product.
    Finally, as part of the GPUOpen portal and RTG’s open source efforts, the company will be hosting these projects on the ever-popular GitHub platform, allowing developers to fork the projects and submit changes as they see fit. The portal and the GitHub repositories for the initial projects will be launching in January. And though RTG didn’t offer much in the way of details for their future plans, they have strongly hinted that this is just the beginning for them, and that they are developing additional effects libraries and code samples that will be made available in the not-too-distant future. Ultimately, with the first DirectX 12 games shipping next year and with Vulkan expected to be finalized in 2016 as well, I wouldn’t be surprised to see the GPUOpen portal become the focal point of RTG’s developer relations efforts for low-level GPU programming, while further leveraging the similarities between console development and low-level PC development.
    Gallery: AMD GPUOpen 2016 Presentation


  6. RSS Bot FEED

    Anandtech: The Google Nexus 6P Review

    As we have come to know, Google has opted to simultaneously release two Nexus smartphone devices this year. Alongside the smaller form-factor LG Nexus 5X, which we reviewed a couple of weeks ago, we also find the larger Huawei-built Nexus 6P. The Nexus 6P is the successor to last year’s Motorola Nexus 6. The new device also marks a first for Google’s Nexus line-up: the introduction of Huawei as a hardware partner.
    The symbiosis created by the collaboration between OEMs and Google for Nexus devices is quite unique in the market and is more similar to how ODMs operate. In the case of the Nexus devices, the hardware vendors make their design resources and production lines available to Google. This usually means that a Nexus device from a given vendor will most of the time be remarkably similar in build to what the OEM offers for their own product lines at that moment in time. As we’ve seen in the past, this has held true for the last few generations of Nexus devices, where for example the Nexus 6 took design cues from Motorola’s own Moto X devices and LG’s Nexus 5X sports very typical LG build characteristics.
    The Nexus 6P is no different in this regard. Huawei has had a long history of producing metal frame devices and in the past few years has even made this a trademark design characteristic of their latest models. In terms of build, the Nexus 6P is clearly reminiscent of the Mate series and even has some design cues similar to the recently announced Mate 8.


  7. RSS Bot FEED

    Anandtech: Sony Digital Paper System DPT-S1 Review

    The e-reader market has lost some of its initial appeal due to the rapid rise in popularity of tablets and other similar mobile devices. However, 'tablets' with E-Ink screens continue to offer the best experience in terms of battery life as well as reducing eye strain. E-Ink screens have not scaled well in size, with the 6" screen size being the most popular and economical choice. Products with bigger screen sizes such as the Kindle DX (9.7") have not enjoyed market success due to pricing issues. Sony's Digital Paper System (DPT-S1) targets business users with a 13.3" E-Ink Mobius screen. It comes with a stylus / pen for taking notes and annotating PDFs. Is the Sony DPT-S1 right for you? How is the user experience with the digital paper system? This review will provide some answers.


  8. RSS Bot FEED

    Anandtech: The Logitech MX Anywhere 2 Mouse: Portable Performance

    If you are like me, you prefer to use a mouse over a trackpad. Part of this stems from being comfortable with a desktop PC, and part of it has to do with the inconsistent quality of trackpads over the years. I find myself more productive when I use a mouse, due to the increased accuracy and comfort of using one. There are likely people around who exclusively use a trackpad, but for those of us who prefer a mouse, having one makes working easier.
    This brings me to the Logitech MX Anywhere 2 mouse. This is the second generation of Logitech’s travel mouse to feature their DarkField Laser Tracking, and it brings quite a few changes over the original. But let’s start with the basics. The MX Anywhere 2 is a right-hand travel mouse, so it is more compact than Logitech’s desktop offerings. It is 61.6 mm (2.4”) wide, 34.4 mm (1.4”) high, and 100.3mm (3.9”) in length. Being a travel mouse, weight is important, and the MX Anywhere 2 comes in at just 106 grams (3.7 oz) so it should not be a burden to bring with you.
    The problem is that when you are travelling, you don’t have much say over the surface you’ll be able to do your work on. Traditional optical mice can struggle to achieve any level of precision on very smooth surfaces, since they use the imperfections on the surface to create a map of where they are and what direction they are moving in. Logitech’s answer to this is their DarkField laser technology. When used on a normal surface, such as wood or cloth, there is plenty of texture to go on, and one laser is used to light up the surface for the sensor. The smart bit of technology comes when you use the mouse on a transparent surface such as glass. A second laser is used which helps illuminate the dust particles on the surface. The transparent surface is seen as black, and the dust is reflected as a bright image, almost making it look like the night sky. Dust and scratches are cast as bright against a dark field, hence the name DarkField.
    DarkField Tracking (Source: Logitech Innovation Brief)
    The MX Anywhere 2 mouse has some different features compared to the original Anywhere Mouse MX. The new model is much lighter: the original version was 317 grams versus just 106 grams for the new model. Logitech has moved from the AA battery powering the first generation to a rechargeable 500 mAh battery that is charged over a micro USB cable. Considering the ubiquity of these cables, this makes a lot of sense. Logitech says that one minute of charge will give you one hour of use, and a full charge should last up to two months with six hours of use per day. One feature that is missing is a charge indicator on the mouse itself, but you can check the charge state using the Logitech software. There’s no way to do an accurate battery life test on this like we do with notebooks and smartphones, but so far I’ve charged the mouse only once, when I first purchased it in October. Since then I’ve not charged it, and have usually just left it powered on. The standby drain is very good, with the mouse lasting over two months so far on the first charge. I don’t use it six hours per day, but so far it appears Logitech has not been exaggerating.
    Anywhere MX (left) vs MX Anywhere 2 (right) styling
    Another change from the original mouse is the on/off switch. The Anywhere Mouse MX had a great on/off mechanism which was actually a slide cover that protected the lasers and sensor when closed; closing it would also turn off the mouse. The new model dispenses with that in favor of a more traditional switch, though it is still a physical sliding switch, recessed into the underside of the mouse. This should prevent the mouse from accidentally turning on in your bag.
    Anywhere MX (left) with sliding on/off cover plate vs MX Anywhere 2 (right) on/off slide switch
    The bottom of the mouse also houses another big change. The new mouse can now be paired with up to three devices, over either Bluetooth or the included Logitech Unifying receiver. This removes one of the biggest issues with the original mouse: the lack of Bluetooth support. Mobile devices often do not have spare USB ports to house a receiver 24/7, so adding the Bluetooth option is exactly what was needed. Pressing a button on the bottom cycles through the three pairings, so you can stay paired with whatever devices you need to use this on, and the Unifying receiver remains available if you do not have access to Bluetooth.
    Other than those changes, this is definitely an evolutionary upgrade over the original Anywhere Mouse MX. The basic shape and button location has not changed, and it keeps the excellent Logitech scroll wheel which allows you to switch between a traditional stepped scrolling or an almost frictionless HyperScroll which lets you blast through spreadsheets or web pages with one flick of the finger.
    So with that out of the way, let’s talk about how the mouse works in the real world. The first part of this is the Logitech Options software. It is not required (if you don’t install it, you just can’t customize the buttons), but Logitech has added enough here to make it worth installing. The software is simple to use, and the layout is much better than Logitech’s older software, which I found cluttered.
    Once installed, you can customize the button layout to basically anything you can think of. One feature I find particularly handy is the ability to map key commands and macros to buttons. For instance, I frequently need Alt+PrtScn, and I can map that to one of the side buttons. You can also customize the button layout depending on which application you are in.
    The sensor DPI can be adjusted as well. Out of the box it is set at 1000 DPI, but it can be set anywhere from 400 to 1600 DPI in increments of 200. This is not a gaming mouse, so 1600 DPI is generally plenty in my experience and I prefer closer to 1000 for office tasks anyway.
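    The range above works out to a small, discrete set of sensor settings; a quick sketch of the arithmetic from the text (plain Python, purely illustrative):

    ```python
    # Valid DPI settings per the text: 400 to 1600 in steps of 200,
    # with 1000 DPI as the out-of-the-box default.
    dpi_steps = list(range(400, 1601, 200))
    print(dpi_steps)          # [400, 600, 800, 1000, 1200, 1400, 1600]
    print(1000 in dpi_steps)  # True: the factory default is one of the valid steps
    ```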
    For such a small mouse, the ergonomics are pretty good, with a nice wide shape and contoured sides for your fingers to rest on. The button placement is perfect for right-handed people (sorry, lefties), and friction is fairly low on most surfaces thanks to the smooth plastic feet. The important bit, though, is how the mouse fares on surfaces that would normally stump an optical mouse. Logitech has been using DarkField for some time now, so I suppose it is no surprise that it has almost no issues on any surface, including glass. On a trip to NYC, the coffee table in the hotel room was glass, and I used it as my desktop with the Surface Pro 3. The Logitech MX Anywhere 2 had zero tracking issues.
    Everything in the box: micro USB charging cable (yes, you can charge and use the mouse at the same time), mouse, and receiver
    I do like the new revision quite a bit, but it’s too bad that Logitech has removed some of the features that made the original mouse so nice. There is no longer a travel pouch included, which would be welcome for something that gets chucked in a bag. The loss of the sliding door, which served as both a protective cover and an on/off switch, is also unfortunate. But I think the inclusion of Bluetooth, with up to three profiles, is a net win for users. The small Unifying receiver is great, but on a device like the Surface Pro 3, which has just a single USB port, occupying that port with a mouse seems a bit wasteful. As small as the receiver is, it still sticks out of the side of the port, whereas Bluetooth does not.
    I have been very pleased with the Logitech MX Anywhere 2 Mouse, and if you are someone who travels, and wants to use a mouse to supplement a trackpad on a notebook, this mouse is likely the travel mouse to beat. It’s small, light, and powerful, with great battery life, quick charging, and most importantly, it can be used practically anywhere.


  9. RSS Bot FEED's Avatar

    Anandtech: Sapphire Readies Nitro R9 Fury: Custom Design and Enhanced Performance

    Sapphire Technology is preparing to release a new graphics card, the Nitro R9 Fury, based on AMD’s Fiji graphics processing unit and featuring the company’s own custom printed circuit board. Sapphire claims the new Nitro R9 Fury will offer enhanced durability and slightly higher performance. In addition, it is logical to expect the card to offer greater overclocking potential than reference boards.
    Sapphire is affiliated with PC Partner, a large Hong Kong-based holding company, and is among the largest makers of graphics cards on the planet. The company works exclusively with AMD, which is why it offers a very comprehensive AMD Radeon product family, and its lineup includes numerous unique products designed in-house that are not available from other manufacturers. However, when it came to AMD’s Radeon R9 Fury family, Sapphire initially decided not to develop its own version of the more affordable (vanilla) R9 Fury, unlike some other companies. The company tells us it intentionally chose to introduce a Radeon R9 Fury featuring AMD’s reference PCB and a custom cooler that is both silent and efficient, rather than developing its own board.
    PC Partner often acts as a contract manufacturer for AMD — it produces AMD FirePro and reference AMD Radeon R9 boards, which are then sold under various brands. As a result, unlike its rivals, Sapphire is less economically motivated to develop its own designs of printed circuit boards (PCBs) and transfer manufacturing to its own facilities as soon as possible to cut its costs down. This is another reason why Sapphire decided not to alter PCB design of the Radeon R9 Fury in mid-2015. Nonetheless, the company was not standing still and has developed its custom Nitro R9 Fury video card, which is expected to hit the market in January 2016.
    Sapphire’s Nitro R9 Fury features the maker’s own printed circuit board (PCB), which is noticeably different from AMD’s reference design. The PCB is significantly taller and longer than the one developed by AMD, and features a six-phase GPU voltage regulator module (VRM) that uses Sapphire’s own “Black Diamond” solid-state inductors with integrated heatsinks as well as high-quality 16K tantalum capacitors.
    The new VRM provides cleaner and more stable power to the graphics processing unit, according to Sapphire. Moreover, the revamped voltage regulator module is rated to deliver around 20% higher current than the reference, which should enable better overclocking potential for the GPU. Components of the VRM are placed in a way to ensure their efficient cooling, which is why maximum temperature of the power delivery circuitry is at least 15% lower under high load compared to that on the reference graphics card, claims the developer.
    The Sapphire Nitro R9 Fury retains AMD’s dual UEFI BIOS technology, so it should be possible for enthusiasts to relatively safely play with power limits or even try to unlock disabled stream processors. One of the BIOS settings allows increasing the power and temperature limits to 300W and 80°C for greater overclocking potential.
    The Sapphire Nitro R9 Fury utilizes the company’s massive triple-fan Tri-X cooler with seven heatpipes, intelligent fan control, an aluminum backplate and a die-cast mounting plate for great cooling and additional reliability. Since the Tri-X cooler is large, the Sapphire Nitro R9 Fury will require 12” of space inside PC cases. Just like the regular Radeon R9 Fury video cards, the new board will need two 8-pin PCIe auxiliary power connectors.
    Sapphire’s Nitro R9 Fury will sport one dual-link DVI connector, which should please owners of older displays. In addition, the card features three DisplayPort 1.2 outputs and one HDMI 1.4 output.
    The Sapphire Nitro R9 Fury is based on a cut-down version of AMD's Fiji GPU, with 3584 stream processors, 224 texture units and 64 ROPs. The GPU runs at 1050MHz, which is slightly above AMD’s recommendation, and owners will be able to overclock the chip further. Just like other Fiji-based offerings, the Nitro R9 Fury comes with 4GB of high-bandwidth memory (HBM) clocked at 1000MHz (effective) and providing 512GB/s of bandwidth.
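    The 512GB/s figure follows from Fiji's 4096-bit HBM interface (a known Fiji spec, not stated in the announcement above) and the quoted 1000MHz effective data rate; a quick back-of-the-envelope check:

    ```python
    # Memory bandwidth = bus width (in bytes) x effective per-pin data rate.
    # The 4096-bit bus width is Fiji's publicly known HBM interface width,
    # not a figure from Sapphire's announcement.
    bus_width_bits = 4096
    effective_rate_hz = 1_000_000_000  # 1000MHz effective, per the article

    bandwidth_gbs = bus_width_bits / 8 * effective_rate_hz / 1e9
    print(bandwidth_gbs)  # 512.0 GB/s, matching the quoted figure
    ```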
    Sapphire has yet to announce the final price of the Nitro R9 Fury, but PC Games Hardware reports that the card will cost about the same as boards featuring AMD’s reference design.
    Gallery: Sapphire Readies Nitro R9 Fury: Custom Design and Enhanced Performance


  10. RSS Bot FEED's Avatar

    Anandtech: Qualcomm renames Snapdragon 618, 620 to 650, 652

    Earlier in the year we covered Qualcomm's announcement of the Snapdragon 618 and 620 mid-range SoCs. We haven't heard much about them since, but today Qualcomm announced a product renaming: from now on, the Snapdragon 618 becomes the Snapdragon 650 and the Snapdragon 620 becomes the Snapdragon 652. It seems Qualcomm saw the possibility of lineup confusion with the existing Snapdragon 61x models, and because the new chips represent a significant performance boost over the previous-generation A53-only SoCs, the company repositioned them with higher model numbers to better differentiate them.
    Qualcomm 2016 Mid- to High-End

                   Snapdragon 650            Snapdragon 652            Snapdragon 820
    CPU            2x Cortex A72 @1804MHz    4x Cortex A72 @1804MHz    2x Kryo @2150MHz
                   4x Cortex A53 @1440MHz    4x Cortex A53 @1440MHz    2x Kryo @1598MHz
    Memory         2x 32-bit @ 931MHz        2x 32-bit @ 931MHz        2x 32-bit @ 1803MHz
                   14.9GB/s b/w              14.9GB/s b/w              28.8GB/s b/w
    GPU            Adreno 510 @ 550MHz       Adreno 510 @ 550MHz       Adreno 530 @ 624MHz
    Video          2160p30, 1080p90          2160p30, 1080p90          2160p30 (p60 decode)
                   H.264 & HEVC              H.264 & HEVC              H.264 & HEVC
    Camera/ISP     Dual ISP                  Dual ISP                  Dual ISP
    Modem          "X8 LTE" Cat. 7           "X8 LTE" Cat. 7           "X12 LTE" Cat. 12/13
                   300Mbps DL, 100Mbps UL    300Mbps DL, 100Mbps UL    600Mbps DL, 150Mbps UL
                   2x20MHz C.A. (DL & UL)    2x20MHz C.A. (DL & UL)    3x20MHz C.A. (DL)
                                                                       2x20MHz C.A. (UL)
    Mfc. Process   28nm HPm                  28nm HPm                  14nm LPP
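    The memory bandwidth figures quoted for these SoCs are consistent with two 32-bit channels at the stated clocks, assuming a double data rate interface; a quick sanity check:

    ```python
    # Bandwidth = channels x bus width (bytes) x clock x 2 (double data rate).
    # Channel counts and clocks are taken from the spec table above.
    def ddr_bandwidth_gbs(channels, bus_bits, clock_mhz):
        return channels * bus_bits / 8 * clock_mhz * 1e6 * 2 / 1e9

    print(round(ddr_bandwidth_gbs(2, 32, 931), 1))   # 14.9 -> Snapdragon 650/652
    print(round(ddr_bandwidth_gbs(2, 32, 1803), 1))  # 28.8 -> Snapdragon 820
    ```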
    As a reminder, the Snapdragon 650 and 652 are big.LITTLE SoCs with ARM Cortex A72 big cores running at 1804MHz, paired with Cortex A53 little cores clocked at 1440MHz. Both SoCs have four little cores, but the Snapdragon 652 sports four A72 cores while its smaller brother has only two.
    For the GPU we find an Adreno 510 clocked at 550MHz. We still don't have much information on how this new part will perform compared to its predecessors, but we should expect performance along the lines of the Adreno 418 in the current Snapdragon 808.
    The biggest difference between the Snapdragon 650/652 and the high-end flagship, the Snapdragon 820, will be the manufacturing node: the former parts are made on the mature 28nm HPm node, while the Snapdragon 820 enjoys the newer 14nm LPP process.
    At the time, we expected the new parts to ship in devices by the end of the year, but as we close in on the holiday season it looks like we'll have to wait until early 2016 to get our hands on the new products. Hopefully we'll see device announcements in the coming months, with availability shortly after.

