
Thread: Anandtech News

  1. Anandtech: Google Unveils 2018 Chromecast: 1080p at 60 fps, Chromecast Audio Support

    Google on Tuesday introduced its newest Chromecast dongle for media streaming. The updated device adds support for 60 fps streaming at 1080p, but does not support a 4K resolution, which is why the Chromecast Ultra remains Google’s top-of-the-range media player. In addition, the new dongle supports Chromecast Audio technology.
    The third-generation Chromecast is based on an SoC that is 15% faster when compared to the chip that powers the second-gen Chromecast dongle. These limited performance improvements naturally did not allow Google to significantly improve the feature-set of the device (e.g., add 4K streaming support). As a result, the only tangible streaming advantage that the new Chromecast has over its predecessor is support for 1080p60 video. In addition, the updated device will support Chromecast Audio functionality, which lets a Chromecast play back music in sync with other speakers connected to Google’s devices (this capability will be added later in 2018).
    When it comes to connectivity, the Chromecast continues to feature an HDMI interface and a Micro-USB connector for power (5V, 1A) or an optional Ethernet adapter, while its 802.11ac Wi-Fi now supports both the 2.4 GHz and 5 GHz bands, so it performs a bit better than its predecessor. As for compatibility, the Chromecast works with devices running Android, ChromeOS, iOS, macOS, and Windows. The Chromecast can also work with Google's Home device, Google's Assistant speakers, and other smart home electronics.
    Just like before, the 2018 Chromecast device will retail for $35.
    Related Reading:


  2. Anandtech: Honor 8X Hands-On: 6.5-inch Best Screen-Size Per Dollar

    Everyone is always interested in the next budget 'high-end killer' smartphone. In recent memory we've seen a number of Asian smartphone companies attempt it, with flagship-like specifications always below $300. Honor's latest attempt sits at the bottom of its stack but boasts a 6.5-inch full-screen display, an AI camera, and one of the latest chips from its big brother, Huawei.


  3. Anandtech: Intel Launches "Vision Accelerator Design" Products, Based On Movidius Acc

    Intel on Wednesday introduced a new series of computer vision accelerators powered by a combination of Movidius chips and Arria FPGAs. Dubbed the Vision Accelerator Design series, the new devices are designed to simplify the development and manufacturing of devices featuring computer vision. The vision accelerators will be available to select customers of Intel.
    While Intel is not publicly announcing specific SKUs, the accelerators will be available with either Intel's Movidius VPUs or their Arria 10 FPGAs, and will come in PCIe, mini PCIe, and M.2 form factors. The Movidius-based devices are primarily intended for edge devices – think cameras, low-power servers, and the like – while the Arria 10 devices are higher power and meant for what Intel is dubbing "edge servers". Being based on Intel's existing hardware, there aren't any groundbreaking new features here, so each version of the accelerator is meant to play into that chip's natural strengths, be it the high efficiency of the VPU in very specific workloads, or the greater flexibility of the FPGA.
    Overall then, the accelerators have the same features as the standalone chips, including inline processing, multi-stream aggregation, and a set of deep inference and sensor processing acceleration capabilities for edge devices. The products can be programmed using Intel’s Open Visual Inference & Neural Network Optimization (OpenVINO) toolkit.
    By offering vision accelerators in add-in-card form-factors, Intel is looking to simplify the design of products (e.g., cameras, servers, NVRs, etc.) that use these devices, and enable seamless upgrades when higher-performing accelerators become available.
    Dell, Honeywell, and QNAP will be Intel’s first customers to use the reference vision accelerator cards, though exact details have not been disclosed. Based on comments from these companies, Dell and Honeywell intend to offer edge computing solutions powered by Intel’s cards.
    Meanwhile, this marks the latest product to come out of Intel using their Movidius VPUs. Last year Intel released its Movidius Neural Compute USB stick, so this would seem to be the next step by moving those VPUs on to an internal card.
    Related Reading:

    Source: Intel


  4. Anandtech: The Razer Phone 2 Hands On: Now With Wireless Charging, IP67, and RGB

    When Razer announced its Razer Phone as a 'gaming smartphone', a sizeable number of people scoffed at the idea - how can it be a gaming smartphone if everyone has the same flagship hardware? In Razer's own words, they were 'carving a new market', with features like a 120Hz Ultramotion display and HDR, as well as a special fast chip under the hood. Razer says it easily met its sales expectations, and it is ready to announce the Razer Phone 2, a refined model with a number of extra requested features.


  5. Anandtech: A Sneak Peek at our Core i9-9900K Sample

    Before I had even landed back home from Intel's 9th Gen announcement event in New York, the Core i9-9900K sample from Intel had arrived. We still have some time before we can publish our results, but we can still take a look at how Intel packaged it up for us. A fair warning: we didn't get the d12 flashy case shown on stage.
    This One Doesn't Roll

    Shown on stage at Intel's event was a brand new flashy box design. It looks exactly like a 12-sided die, similar to those used in dice-based games such as D&D, and it isn't much larger than a hand.
    Here is Intel's Anand Srivatsa holding the flashy design on stage. Roll for initiative, I guess?
    Intel 9th Gen Core
    AnandTech      Price   Cores    TDP    Freq (Base/Turbo)   L3      L3 Per Core   DDR4    iGPU   iGPU Freq
    Core i9-9900K  $488    8 / 16   95 W   3.6 / 5.0 GHz       16 MB   2.0 MB        2666    GT2    1200 MHz
    Core i7-9700K  $374    8 / 8    95 W   3.6 / 4.9 GHz       12 MB   1.5 MB        2666    GT2    1200 MHz
    Core i5-9600K  $262    6 / 6    95 W   3.7 / 4.6 GHz       9 MB    1.5 MB        2666    GT2    1150 MHz
    Intel is releasing three processors next week, although Intel is only sampling the Core i9-9900K.
    The Press Kit

    Instead of the dodecahedron, Intel plumped for a customized box. Inside, we were greeted with our name.
    Intel's big thing for the Core i9 and the 9th generation parts is the motto 'performance unleashed', moving up to eight cores on the Core i9 series. Everything inside is cardboard, with the processor underneath.
    Intel has supplied the press with engineering samples that show off the S-Spec name of QQPP and the base frequency of 3.60 GHz.
    There's not much else to say. Intel is sampling the Core i9-9900K only, and motherboard vendors have started shipping out samples. We will have reviews of the motherboards soon, and our 9th gen review will go live on the embargo day, October 19th.

    Edit: Peek for Peak in title. Too many peeks.


  6. Anandtech: Samsung Announces The 2018 Galaxy A9 With Four Rear Cameras

    We’re all very familiar with Samsung’s Galaxy S flagships, but there’s also a myriad of mid-range devices that the Korean firm also offers – trying to cover all consumer price ranges.
    Today’s announcement covers the new Galaxy A9. The A-series has been relatively popular in the past few years, offering a step down from the top S series in terms of features and price.
    The Galaxy A9 in particular is noteworthy in that it’s Samsung’s first smartphone to deploy four rear cameras.
    Samsung Galaxy A9 (2018)
    SoC            "Quad core" 4x 2.2GHz + 4x 1.8GHz
    Display        6.3-inch 2220x1080 (18.5:9)
    Dimensions     162.5 x 77 x 7.8 mm, 183 grams
    NAND           128GB
    Battery        3800mAh (14.63Wh)
    Front Camera   24MP f/2.0
    Rear Cameras   Regular: 24MP f/1.7
                   Wide Angle: 8MP f/2.4, 120° FoV
                   Telephoto: 10MP f/2.4, 2x Zoom
                   Depth: 5MP f/2.2
    SIM Size       2x NanoSIM (dedicated SIMs + microSD slots)
    Connectivity   802.11ac 2x2 WiFi, BT 5.0, NFC
    Interfaces     USB 2.0 Type-C; 3.5mm audio
    Launch OS      Android O (8.0)
    Launch Price   6GB/128GB: £549
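    The two battery figures in the table are self-consistent: watt-hours follow from capacity and the pack's nominal cell voltage. A minimal sketch of the conversion, where the 3.85 V nominal voltage is an assumption on my part (typical of modern Li-ion phone cells, and back-derived from the listed numbers rather than disclosed by Samsung):

```python
# Convert a phone battery's mAh rating to Wh, given a nominal cell voltage.
# The 3.85 V default is an assumption (typical modern Li-ion nominal voltage),
# not a Samsung-disclosed figure; it is what makes 3800 mAh come out to 14.63 Wh.
def mah_to_wh(capacity_mah: float, nominal_voltage: float = 3.85) -> float:
    return capacity_mah / 1000 * nominal_voltage

print(round(mah_to_wh(3800), 2))  # 14.63
```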
    First of all, in terms of specifications, the new Galaxy A9 is the definition of a mid-range smartphone. Samsung doesn’t exactly specify which SoC the device uses, but discloses that it’s a 4x 2.2GHz + 4x 1.8GHz design. There have been some reports that this might be a Snapdragon 660 in some markets, and there’s no matching Exynos at those frequency configurations. Still, this would give the A9 plenty of performance, and it is accompanied by 6GB of RAM and a surprising 128GB of base storage. The fact that Samsung is now upping the base storage to 128GB in such devices is really fantastic to see, and really ups the game in comparison with some other new flagship devices this year that still launch with 64GB.
    The screen is a large 6.3” 18.5:9 AMOLED screen with 2220 x 1080 resolution. This is a big device, coming in with a width of 77mm, 3mm wider than a Galaxy S9+. Battery wise there’s also a big 3800mAh battery, which should serve the phone well in terms of battery life.
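    As a sanity check on the panel numbers, pixel density follows directly from the resolution and the diagonal size. A small sketch:

```python
import math

# Pixels-per-inch from panel resolution and diagonal size. The Galaxy A9's
# 6.3" 2220x1080 panel works out to roughly 392 ppi.
def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(2220, 1080, 6.3)))  # 392
```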
    Now, the key feature of the A9 is its four rear cameras. The A9 uses the trifecta of a regular-angle module, a wide-angle module, and a telephoto module. On top of these three imaging cameras, there’s also a fourth camera that serves as a depth sensor.
    The main camera is a 24MP unit with an f/1.7 aperture. In daylight pictures the sensor uses the full 24MP resolution, while Samsung now introduces a pixel binning mode in low light scenarios where 2x2 pixels on the sensor result in a single logical pixel in the output image – effectively resulting in 6MP photos.
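    The 2x2 binning Samsung describes can be illustrated with a toy sketch: each 2x2 block of sensor pixels collapses into one output pixel, quartering the pixel count (24MP to 6MP). Real sensors bin in the analog/charge domain, so this pure-Python averaging only captures the geometric idea:

```python
# Toy 2x2 pixel binning: average each 2x2 block of an H x W grid into one
# output pixel, quartering the pixel count (24MP -> 6MP on the A9).
# Illustrative only; real sensors combine charge before readout.
def bin_2x2(pixels):
    h, w = len(pixels), len(pixels[0])
    return [
        [(pixels[y][x] + pixels[y][x + 1] +
          pixels[y + 1][x] + pixels[y + 1][x + 1]) / 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

frame = [[10, 20, 30, 40],
         [10, 20, 30, 40]]
print(bin_2x2(frame))  # [[15.0, 35.0]]
```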
    The wide angle module is a first for Samsung. For a long time this was an exclusive feature of LG devices, and it seems this year we’ll see a lot more vendors employ similar modules. The A9’s module is 8MP with fixed focus and an f/2.4 aperture – the key characteristic here is the 120° field of view.
    Finally, the telephoto lens is a 10MP unit with an f/2.4 aperture offering 2x optical zoom.
    We’ve seen dedicated depth cameras in past smartphones, most prominently from OnePlus. Now I can’t really think of any good reason as to why Samsung would use this 5MP depth sensor over the wide-angle unit when doing portrait mode depth sensing – the only possible explanation would be that the A9 would also support portrait mode shots on the wide-angle camera itself, which would be achievable through the dedicated sensor.
    While I did refer to the A9 as mid-range, Samsung is probably targeting the Galaxy A9 more at the “super premium” category, as the £549 launch price is above the usual range where we’d see the top A-series phones priced.


  7. Anandtech: Apple Licenses PMIC Technologies & Hires Engineers from Dialog Semiconduct

    Apple and Dialog Semiconductor on Wednesday signed an agreement that will see Apple buy part of their long-term power management IC supplier, while continuing to do regular business with the remaining parts. Under the terms of the deal, Apple will be licensing a number of power management technologies from Dialog, hiring over 300 Dialog employees, and assuming control of several Dialog facilities in Europe. Meanwhile the deal also sets up Apple to be a long-term customer of the remainder of the company, with Dialog contracted to supply hardware through 2021. The two deals will net the semiconductor developer around $600 million in total.
    One of the key ways to keep power consumption of modern SoCs in check is by carefully managing their power supply using advanced power management ICs (PMICs). Historically, Apple has been using third-party power management ICs (including those from Dialog), but it looks like the company intends to more directly produce and integrate at least some of this technology for future products. Apple has been investing in development of its own semiconductors since 2008, gradually expanding its range of products, IP portfolio, and engineering teams. So the addition of power management IP and developers fits well with Apple’s strategy of moving to in-house developed technologies.
    In a bid to enable development of its own PMICs, Apple will pay Dialog $300 million in cash for the parts of the business it is outright acquiring. The money will bring Apple 300 experienced employees; four Dialog facilities in Livorno (Italy), Swindon (U.K.), and Nabern and Neuaubing (Germany); and an IP license for certain power management technologies. The aforementioned employees have worked closely with Apple for years, so they know the company’s requirements.
    In fact, Apple reportedly established PMIC development centers in Munich (Germany) and California, which employed 80 engineers as of early 2017. Apple has never formally confirmed this, and it is unknown how successful these teams were. In any case, with 300 engineers and four facilities, Apple’s R&D capabilities in this field will get considerably stronger.
    Separately, Apple will prepay $300 million for Dialog products that will be delivered starting from 2019 and ramping up in 2020 – 2021. The products in question are PMICs used for power management and charging, chips for audio subsystem, and “other mixed-signal integrated circuits”. Given the fact that Dialog does not say anything about supply agreement beyond 2021, it is not clear whether Apple intends to go fully vertical after this time, or if they simply aren't looking to sign (public) contracts farther out than this.
    Dialog says that $600 million from Apple will enable it to invest in development of various mixed-signal solutions for IoT, mobile, automotive, computing, and storage markets. Essentially, the company is looking to hasten their transition to a broader supplier of supporting hardware, allowing them to tap into new and growing markets while diversifying their product portfolio and income streams.
    Related Reading:

    Source: Dialog Semiconductor


  8. Anandtech: Intel Veteran Becomes Chairman of Toshiba Memory

    Toshiba Memory Corp. (TMC) on Thursday disclosed that Stacy Smith, a former high-ranking exec of Intel, was appointed executive chairman, effective October 1. Mr. Smith brings a wealth of experience in strategic and operative management to the maker of NAND flash memory and products based on it.
    Stacy Smith worked at Intel from 1988 to 2018, serving in different positions. Most recently, he was President of Manufacturing Operations and Sales, hosting Intel's Manufacturing Day, and was formerly (2006-2016) Intel's CFO. As chief financial officer, he was responsible for corporate strategy, mergers & acquisitions, finances, and the Intel Capital investment arm. Before that, he was CIO and vice president for sales in EMEA.
    During Mr. Smith’s tenure as CFO, Intel not only transformed itself from a CPU maker to a provider of computing platforms, but also entered the market of storage devices with its SSDs, so Mr. Smith is not new to NAND flash business.
    By making Stacy Smith executive chairman, Toshiba Memory gets an experienced exec who knows how to set up corporate strategies and how to manage vertically-integrated semiconductor manufacturing companies. Furthermore, the appointment plays an important image-building role and ensures that Toshiba Memory can interact with its U.S. investors. Mr. Smith will join Yasuo Naruke, president and CEO of TMC, who has fab management and corporate management experience.
    Related Reading


  9. Anandtech: Western Digital Unveils iNAND MC EU321: a UFS 2.1 Drive Based on 96L 3D NA

    Western Digital has announced its new lineup of UFS 2.1-based embedded storage devices for smartphones, tablets, PCs, and other mobile applications. The new iNAND MC EU321 drives are based on the company’s 96-layer 3D NAND memory and offer performance comparable to that of entry-level SSDs.
    Western Digital’s iNAND MC EU321 embedded flash drives feature capacities between 32 GB and 256 GB, and are based around an in-house controller that supports a UFS 2.1 Gear3 2-lane interface. When it comes to performance, the EFDs are rated for up to 800 MB/s sequential read speed, up to 550 MB/s sequential write speed, and up to 50/52K random read/write IOPS, with these figures presumably including the impact of WD's proprietary iNAND SmartSLC 5.1 caching technology.
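    For context, random IOPS and sequential MB/s are different units; at the conventional 4 KiB transfer size used for such ratings (an assumption on my part, not something WD states), the EU321's 50K random read IOPS corresponds to roughly 200 MB/s of random throughput:

```python
# Convert a random-I/O IOPS rating to throughput, assuming the conventional
# 4 KiB transfer size used for such figures (an assumption, not WD's spec).
def iops_to_mb_s(iops: int, block_bytes: int = 4096) -> float:
    return iops * block_bytes / 1e6  # decimal MB, as drive specs use

print(iops_to_mb_s(50_000))  # 204.8
```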
    The iNAND MC EU321 comes in a BGA package that measures 11.5×13×1 mm, which is small enough for modern smartphones and tablets. Meanwhile, given the capacities offered by the new flash drives, they can also be used for laptops, VR headsets, and other kinds of devices that can take advantage of high-performance storage.
    The new family of iNAND MC EU321 embedded flash drives will complement other 3D NAND-based UFS 2.1 offerings from Western Digital and will enable the manufacturer to make such solutions slightly cheaper. Traditionally, memory suppliers do not disclose which specific customers will use their EFDs, but we should see the first products using the drives shortly.
    Western Digital has been shipping commercial products based on 96-layer 3D NAND since at least April. The company introduced its first SSD based on this type of memory in July, so the ultra-dense flash from Western Digital is ready for all types of applications.
    Related Reading:

    Source: Western Digital


  10. Anandtech: Apple iPhone XS Review Addendum: Small Core and NN Performance

    Last week we published our iPhone XS and XS Max review, in which we went into great depth on various aspects of the phones, especially the section regarding the new A12’s CPU performance. However, I wanted to dig a bit deeper into CPU performance than I had time for in the initial review, which I'm finally able to get around to now. The A12’s small cores in particular were something I wanted to have in the article, as Apple's small cores haven't been very well investigated to date. As it’s still an important topic, I’m posting that part here as a pipeline post as well as integrating it as an additional page in the review:
    The A12 Tempest µarch: A Fierce Small Core

    Apple first introduced a “small” CPU core alongside the Hurricane cores in the A10 SoC, powering the iPhone 7 generation. We’ve never really had the opportunity to dissect these cores, and over the years there has been a bit of mystery around them and what they’re capable of.
    Apple’s introduction of a heterogeneous CPU topology was, in one sense, one of the biggest validations of Arm's approach. Having separate lower-power CPUs on an SoC is a simple matter of physics: it’s just not possible for a bigger microarchitecture to scale its power down as efficiently as a separate, smaller block can. Even a mythical, perfectly clock-gated microarchitecture could not combat the static leakage present in bigger CPU cores, which would then contribute to the everyday power consumption of the device, even for small workloads. Power-gating the big CPU cores and instead shifting work to a much smaller CPU alleviates static leakage, as well as (if designed as such) improving dynamic power efficiency.
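    The argument can be made concrete with a toy energy model. All numbers below are illustrative assumptions of mine, not measured values: even a perfectly clock-gated big core pays its static leakage for the whole duration of a light workload, while a small core pays far less despite taking longer to finish.

```python
# Toy energy model for the big-vs-small core argument. All figures are
# illustrative assumptions: a big core with high static leakage vs a small,
# slower core with low leakage, handling a light 1-second-long workload.
def task_energy_j(dynamic_w, static_w, active_s, idle_s):
    # Dynamic power is only burned while executing; static leakage is paid
    # for as long as the core is powered up, even when clock-gated.
    return dynamic_w * active_s + static_w * (active_s + idle_s)

big_j   = task_energy_j(dynamic_w=1.0,  static_w=0.30, active_s=0.1, idle_s=0.9)
small_j = task_energy_j(dynamic_w=0.15, static_w=0.02, active_s=0.4, idle_s=0.6)
print(round(big_j, 2), round(small_j, 2))  # 0.4 0.08
```

    Under these assumed numbers the small core uses a fifth of the energy even though it is active four times as long, because the big core's leakage dominates the light workload.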
    The Tempest cores in the A12 are now the third iteration of this “small” microarchitecture, and since the A11 they are now fully heterogeneous and work independently of the big cores. But the question is, is this actually the third iteration, or did Apple do something more interesting?
    The Tempest core is a 3-wide out-of-order microarchitecture: Already out of the gate this means it has very little to do with Arm’s own “little” cores, such as the A53 and A55, as these are simpler in-order designs.
    The Tempest core’s execution pipelines are also relatively few: There are just two main pipelines that are capable of simple ALU operations; meanwhile one of them also does integer and FP multiplications, and the other is able to do FP additions. Essentially we just have two primary execution ports, each leading to the more complex pipelines behind them. Meanwhile, in addition to the two main pipelines, there’s also a dedicated combined load/store port.
    Now what is very interesting here is that this looks essentially identical to Apple’s Swift microarchitecture from the A6 SoC. It’s not hard to imagine that Apple recycled this design, ported it to 64-bit, and now uses it as a lean out-of-order machine serving as the lower-power CPU core. If this is indeed Swift-derived, then on top of the three execution ports described above, we should also find a dedicated port for integer and FP divisions, so as not to block the main pipelines whenever such an instruction is issued.
    The Tempest cores clock up to a maximum of 1587MHz and are served by 32KB instruction and data caches, as well as an increased shared 2MB L2 cache that uses power management to partially power down SRAM banks.
    In terms of power efficiency, the Tempest cores were essentially my prime candidate to try to get to some sort of apples-to-apples comparison between the A11 and A12 for power efficiency. I haven’t seen major differences in the cores besides the bigger L2, and Apple has also kept the frequencies similar. Unfortunately, "similar" isn't identical in this case; because the small cores on the A11 can boost up to 1694MHz when there’s only one thread active on them, I had no really good way to also measure performance at iso-frequency.
    I did run SPEC at an equal 1587MHz frequency by simply having a second dummy thread spinning on another core while the main workloads were benchmarking. And I did try to get some power figures through this method by regression-testing the impact of the dummy thread. However, the power was near identical to the figures I measured at 1694MHz. As a result I dropped the idea, and we'll just have to keep in mind that the A11’s Mistral cores were running 6.7% faster in the following benchmarks:
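    The 6.7% figure is just the clock delta between the two small-core configurations:

```python
# The A11's Mistral small cores boost to 1694 MHz with one thread active,
# while the A12's Tempest cores were pinned at 1587 MHz for this test:
# a 6.7% clock advantage for Mistral.
mistral_mhz, tempest_mhz = 1694, 1587
advantage = (mistral_mhz - tempest_mhz) / tempest_mhz
print(f"{advantage:.1%}")  # 6.7%
```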
    Much like on the Vortex big cores, the biggest improvements for the new Tempest cores are found in the memory-sensitive benchmarks. The benchmarks in which Tempest loses to Mistral are mainly execution bound, and because of the frequency disadvantage, there’s no surprise that the A12 lost in this particular single-threaded small core scenario.
    Overall, besides the memory improvements, the new Tempest cores look very similar in performance to last year’s Mistral cores. This is great, as we can also investigate power efficiency, and maybe learn something more concrete about the advantages of TSMC's 7nm manufacturing process.
    Unfortunately, the energy efficiency improvements are somewhat inconclusive, and perhaps even disappointing. Looking at the SPECint2006 workloads overall, the Tempest-powered A12 was 35% more energy efficient than the Mistral-powered A11. Because the Mistral cores were running at a higher frequency in this test, the actual efficiency gains for the A12 would likely be even smaller at an iso-frequency level. Granted, we’re still looking at a roughly iso-performance comparison here, as the memory improvements in the A12 were able to push the Tempest cores to an integer suite score nearly identical to the higher-clocked Mistral cores.
    In the overall FP benchmarks, Tempest was only 17% more efficient, even though it did perform better than the A11’s Mistral cores.
    Putting the A11 and A12 small cores in comparison with their big brothers as well as the competition from Arm, there’s not much surprise in terms of the results. Compared to the big Apple cores, the small cores only offer a third to a fourth of the performance, but they also use less than half the energy.
    What did surprise me a lot was seeing just how well Apple’s small cores compare to Arm’s Cortex-A73 under SPECint. Here Apple’s small cores almost match the performance of Arm’s high-performance cores from just 2 years ago. In SPEC's integer workloads, the A12's Tempest is nearly equivalent to a 2.1GHz A73.
    However in the SPECfp workloads, the small cores aren’t competitive. Not having dedicated floating-point execution resources puts the cores at a disadvantage, though they still offer great energy efficiency.
    Apple’s small cores in general are a lot more performant than one would think. I’ve gathered some incomplete SPEC numbers on Arm’s A55 (it takes ages!) and in general the performance difference here is 2-3x depending on the benchmark. In recent years I’ve felt that Arm’s little-core performance range has become insufficient in many workloads, and this may also be why we’re going to see a lot more three-tiered SoCs (such as the Kirin 980) in the future. As it stands, the gap between the maximum performance of the little cores and the most efficient low-performance point of the big cores continues to grow in one direction. All of which makes me wonder whether it’s still worth it to stay with an in-order microarchitecture for Arm's efficiency cores.
    Neural Network Inferencing Performance on the A12

    Another big, mysterious aspect of the new A12 is the SoC's new neural engine, which Apple advertises as designed in-house. As you may have noticed in the die shot, it’s quite a big silicon block, very much equaling the two big Vortex CPU cores in size.
    To my surprise, I found out that Master Lu’s AImark benchmark also supports iOS, and better still it uses Apple's CoreML framework to accelerate the same inference models as on Android. I ran the benchmark on the latest iPhone generations, as well as a few key Android devices.
    Overall, Apple’s 8x performance claims weren’t quite confirmed in this particular test suite, but we see solid improvements of 4-6.5x. There’s one catch here in regards to the older iPhones: as you can see in the results, the A11-based iPhone X performs quite similarly to previous generation phones. What’s happening here is that Apple’s executing CoreML on the GPU. It seems to me that the NPU in the A11 might have never been exposed publicly via APIs.
    The Huawei P20 Pro’s Kirin 970 falls roughly 2.5x behind the new A12 – which coincidentally exactly matches the advertised 2 TOPS vs 5 TOPS throughput capabilities of the two SoCs’ respective NPUs. Here the new Kirin 980 should be able to significantly close the gap.
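    That 2.5x factor falls straight out of the advertised throughput numbers:

```python
# Advertised NPU throughput: Apple A12 ~5 TOPS vs Kirin 970 ~2 TOPS,
# matching the roughly 2.5x gap seen in the AImark results.
a12_tops, kirin970_tops = 5, 2
print(a12_tops / kirin970_tops)  # 2.5
```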
    Qualcomm’s Snapdragon 845 also performs very well, trading blows with the Kirin 970. AImark uses the SNPE framework for inference acceleration, as it doesn’t support the NNAPI as of yet. The Pixel 2 and Note9 offered terrible results here as they both had to fall back to CPU accelerated libraries.
    In terms of power, I’m not too comfortable publishing power figures for the A12 because of how visibly transactional the workload was: the actual inferencing work bumped power consumption up to 5.5W, with lower-power gaps in between. Without actually knowing what is happening between the bursts of activity, the average power figures for the whole test run can vary greatly. Nevertheless, the fact that Apple is willing to go up to 5.5W means that they’re very much pushing the power envelope here and going for the highest burst performance. The GPU-accelerated iPhones’ power peaked in the 2.3W to 5W range depending on the inference model.
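    The measurement problem described above can be sketched with a toy model (all figures below are illustrative assumptions, not my measurements): the same 5.5 W inference bursts yield very different whole-run averages depending on how long the low-power gaps between them last, which is exactly why a single average figure would be misleading.

```python
# Toy illustration of why averaging a bursty power trace is misleading:
# identical 5.5 W bursts produce very different averages depending on the
# length of the low-power gaps between them. Numbers are illustrative.
def average_power(burst_w, gap_w, burst_s, gap_s, n_bursts):
    total_energy = n_bursts * (burst_w * burst_s + gap_w * gap_s)
    total_time = n_bursts * (burst_s + gap_s)
    return total_energy / total_time  # watts, averaged over the whole run

short_gaps = average_power(5.5, 0.5, burst_s=0.2, gap_s=0.1, n_bursts=10)
long_gaps  = average_power(5.5, 0.5, burst_s=0.2, gap_s=0.5, n_bursts=10)
print(round(short_gaps, 2), round(long_gaps, 2))  # 3.83 1.93
```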

