
Thread: Anandtech News

  1. RSS Bot FEED

    Anandtech: Dell Announces UP3218K: Its First 8K Display, Due in March

    Dell has introduced the industry’s first mass-market 8K display, aimed at professional designers, engineers, photographers and software developers. The UP3218K will be available this March, but its roughly $5,000 price tag will be rather high even for professionals dealing with content creation. That being said, $5K or so was the price that the original 4K MST monitors launched at in 2013, which perhaps makes this display's price more palatable. On the other hand, right now an 8K professional display is such a niche product that the vast majority of users will have to wait a few years to see the price come down.
    Up to now, 8K reference displays have been available only from Canon, in very low quantities and at very high prices. Those displays were primarily aimed at video professionals from TV broadcasting companies like NHK, who are working on 8K (they call it Super Hi-Vision) content to be broadcast over-the-air in select regions of Japan next year. A number of TV makers have also announced ultra-large 8K UHDTVs, but these are hardly to be found at retail. Overall, Dell is the first company to offer an 8K display that can be bought online by any individual with the money, and one focused on the monitor market rather than TVs.
    At present, Dell is not publishing the full specifications of its UltraSharp 32 Ultra HD 8K monitor (UP3218K), but reveals key specs like resolution (7680×4320), contrast ratio (1300:1), brightness (400 nits), pixel density (280 ppi) as well as supported color spaces: 100% Adobe RGB and 100% sRGB.
    Preliminary Specifications
    Dell UltraSharp 32 Ultra HD 8K (UP3218K)
    Panel 32" (IPS?)
    Resolution 7680 × 4320
    Brightness 400 cd/m²
    Contrast Ratio 1300:1
    Refresh Rate 60 Hz
    Viewing Angles 178°/178° horizontal/vertical
    Color Saturation 100% Adobe RGB, 100% sRGB
    Display Colors 1.07 billion
    Inputs 2 × DisplayPort 1.3
    As for connecting to host PCs: since a single DisplayPort 1.3/1.4 cable does not provide enough bandwidth for the 7680×4320@60 Hz configuration Dell is targeting, the UltraSharp UP3218K uses two DisplayPort 1.3 inputs to provide the necessary bandwidth, stitching the two display streams together internally using tiling. This is similar to early 5K displays, which used a pair of DisplayPort cables to get around the bandwidth limitations of DisplayPort 1.2. Using two cables is not a big problem given the target market, but it's interesting to note that because 7680×4320@60Hz alone consumes all of the bandwidth supplied by the two cables, there isn't any leftover bandwidth to support HDR or the Rec. 2020 color space.
    On a side note, while the company could have used DisplayPort 1.4's Display Stream Compression (DSC) 1.2 feature to reduce the bandwidth requirements of the monitor, it opted not to. DSC is promoted as visually lossless, but given how demanding many professionals are, and the problems that potential artifacts introduced by DSC could bring, Dell decided to stick with two uncompressed DisplayPort streams instead.
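    A back-of-the-envelope check (our own arithmetic, ignoring blanking and protocol overhead, and assuming 24 bpp for SDR and 30 bpp for 10-bit color) illustrates why two DisplayPort 1.3 links are both necessary and essentially fully consumed at this resolution:

```python
def dp13_payload_gbps(lanes=4):
    """Usable payload of one DisplayPort 1.3 (HBR3) link:
    8.1 Gbps per lane, with 8b/10b line coding leaving 80% for data."""
    return 8.1 * lanes * 0.8          # ~25.92 Gbps per cable

def pixel_data_gbps(width, height, hz, bits_per_pixel):
    """Raw active-pixel data rate, ignoring blanking overhead."""
    return width * height * hz * bits_per_pixel / 1e9

two_cables = 2 * dp13_payload_gbps()              # ~51.8 Gbps total
sdr_8bpc   = pixel_data_gbps(7680, 4320, 60, 24)  # ~47.8 Gbps: barely fits
hdr_10bpc  = pixel_data_gbps(7680, 4320, 60, 30)  # ~59.7 Gbps: exceeds even two links
```

    With blanking overhead added on top, the 8-bit-per-channel stream leaves effectively no headroom, which matches Dell's position that there is no bandwidth left over for HDR or wider gamuts.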

    While a high display resolution is good for photos and images, it also makes everything smaller; and while modern operating systems support scaling, it does not work perfectly in all programs. It's likely that professional applications like AutoCAD or Photoshop will support 8K the day the UltraSharp UP3218K hits the market, but general-use applications, already struggling with 4K and HiDPI in general, will be another matter. Practically speaking, if the price tag alone isn't convincing enough that this is a monitor for specific editing tasks and not for general desktop usage, then the lack of good HiDPI support elsewhere will be. And while I'm sure someone will try to use the UP3218K as a gaming display, at four times the resolution of a 4K monitor, we're at least a few years off from GPUs being able to render high-fidelity games at a 33 Mpix resolution.
    Dell promised to start sales of the Dell UltraSharp 32 Ultra HD 8K monitor (UP3218K) on March 23 on its website. Initially, the monitor is slated to cost $4,999. Time to put in some hardware requisition forms.


  2. RSS Bot FEED

    Anandtech: NVIDIA Launches SHIELD TV: Smart Home Functionality, More 4K HDR Streaming

    NVIDIA at CES launched its updated SHIELD set-top-box (STB), with an expanded feature set in a smaller and lighter form factor. The upcoming NVIDIA SHIELD TV is based on the same Tegra X1 SoC as the previous-generation model launched in 2015, and like the mid-generation refreshes of gaming consoles that NVIDIA is clearly aiming to mimic, this slimmer console with updated software is meant to offer the hardware in a new form factor while calling attention to the major software updates being delivered to the platform this year. Among the new features set to premiere in the near future are support for Google Assistant, compatibility with the SmartThings infrastructure, and NVIDIA’s upcoming Spot wireless microphone.
    NVIDIA launched the SHIELD Android TV back in the spring of 2015 as a quirky combination of lightweight Android gaming console and Android TV set-top-box, and the device quickly earned recognition as the most powerful and capable Android TV device on the market. Thanks to the versatility of the Tegra X1 SoC in general and its Maxwell display controller in particular, NVIDIA launched a box capable of displaying 4Kp60 content, and managed to further improve the feature set of its STB over the last two years. In particular, NVIDIA added HDR display and streaming support, and the box was among the first devices to be certified for 4Kp60 streaming from Netflix and other over-the-top streaming services. While the original SHIELD Android TV still has a lot of potential, some features are being introduced only with the new model, which is what the SHIELD TV is all about.
    From a hardware specifications point of view, the new SHIELD TV is essentially a cut-down version of the original 2015 model. The new device contains the same Tegra X1 SoC, a similar RAM/storage configuration, and much the same I/O. What NVIDIA has done away with is the 2.5" HDD bay (used in the Pro model), along with the microSD card slot and the micro-USB 2.0 port. Instead, the device's expandability and connectivity are delivered through two USB 3.0 ports, along with Gigabit Ethernet and 2x2 802.11ac WiFi. As a result, the new SHIELD TV is considerably smaller and lighter than the original SHIELD Android TV thanks to the space savings (especially from removing the HDD bay).
    NVIDIA SHIELD TV (2017) vs. SHIELD Android TV (2015)
    SoC: Tegra X1 (4 × Cortex A57 + 4 × Cortex A53, Maxwell 2 SMM GPU) (both)
    RAM: 3 GB LPDDR4-3200 (both)
    Storage: 16 GB NAND | 16 GB NAND + 500 GB HDD (Pro only)
    Display Connectivity: HDMI 2.0b with HDCP 2.2 (4Kp60, HDR) (both)
    Dimensions (H × W × D): 98 × 159 × 26 mm (3.86 × 6.26 × 1.02 in) | 130 × 210 × 25 mm (5.1 × 8.3 × 1.0 in)
    Weight: 250 grams | 654 grams
    Power Adapter: 40 W (both)
    Wireless: 2x2 802.11a/b/g/n/ac, Bluetooth 4.1/BLE (both)
    USB: 2 × USB 3.0 | 2 × USB 3.0 + 1 × micro-USB 2.0
    IR: IR Receiver (both)
    Ethernet: Gigabit Ethernet (both)
    Launch Product Bundle: SHIELD Controller + SHIELD Remote | SHIELD Controller
    Launch Price: $199.99 (Pro: $299.99) | Basic: $199.99, Pro: $299.99
    Another notable difference between the 2017 SHIELD TV and 2015 SHIELD Android TV packages is the new gamepad controller, which loses some weight, ditches the touchpad, but gains a microphone for the Google Assistant. The latter lets users find content, control playback and look up other information using voice commands. Technically, the original SHIELD Android TV also has a microphone on the remote, but it requires manual activation, whereas the one on the new controller is “always on.” In fact, the new SHIELD controller can be bought separately to add hands-free commands to NVIDIA's original SHIELD Android TV.
    Up next is smart home functionality. Though this isn't making the initial Android 7.0 software release for the device, the SHIELD TV family will be receiving Google Assistant functionality later this year. This expands on the STB's original voice control functionality, particularly with always-on operation. The old and new SHIELD TV STBs will also be able to act as a SmartThings Hub ($99 when sold separately) and, when combined with an appropriate radio dongle, communicate with compatible devices (such as Nest) using the Zigbee and Z-Wave communication protocols.
    Meanwhile, later this year NVIDIA plans to release its Spot device. The $50 Spot is a wireless microphone and a speaker that can be plugged in any power outlet within a home, relaying voice commands to SHIELD TV. The Spot is meant to further improve the usefulness of the SHIELD TV as a smart home hub by expanding the range over which it can hear commands for Google Assistant and compatible SmartThings devices.
    Finally, NVIDIA announced that the SHIELD TV software update launching alongside the new hardware will also add support for 4K HDR content from both Amazon Video and Google Play Movies (available in the next few months), further expanding the selection of 4Kp60 and 4K HDR content on the platform. Speaking of HDR, NVIDIA's GameStream HDR functionality, first introduced in beta form last May, will also finally be promoted to release status in the software update.
    The new SHIELD TV STBs will come with NVIDIA's remote and the new gamepad by default. NVIDIA will ship the new SHIELD TV STB on January 16 and the systems are already available for pre-order at $199.99. The SHIELD TV Pro model with 500 GB HDD is priced at $299.99 and will ship on January 30.


  3. RSS Bot FEED

    Anandtech: GIGABYTE Quietly Launches Low Profile GeForce GTX 1050, 1050 Ti Graphics Cards

    GIGABYTE has quietly added two low-profile video cards to its lineup of products during CES. The graphics adapters are based on NVIDIA’s GeForce GTX 1050-series GPUs and will be among the most affordable gaming-grade products in the company’s family. The new low-profile add-in-cards will be especially useful for those building mini PCs or HTPCs as well as for those upgrading inexpensive computers from OEMs.
    GIGABYTE’s GeForce GTX 1050 Ti OC Low Profile 4G and GeForce GTX 1050 OC Low Profile 2G graphics adapters are based on NVIDIA’s GP107 GPU (albeit, in different configurations) and carry 4 GB and 2 GB of GDDR5 memory running at 7 Gbps, respectively. Both cards use the same PCB (marked as V16156-0) as well as the dual-slot cooling system featuring an aluminum heatsink and a fan. As for connectivity, the boards also have a similar set of outputs: one DL-DVI-D, two HDMI 2.0b and one DisplayPort 1.4 with HDCP 2.2 support that is required for Ultra HD Blu-ray playback.

    It is noteworthy that GIGABYTE decided to slightly increase the GPU clock rates of its low-profile GeForce GTX 1050-series graphics cards versus NVIDIA’s reference designs to give them some extra punch over rivals. Meanwhile, the TDP and power requirements of its low-profile GTX 1050-series graphics cards remain at approximately the 75 W level, which means no additional power connectors are required and the cards can be installed into any contemporary computer with a PCIe x16 slot.
    Specifications of Low-Profile GeForce GTX 1050-Series Graphics Cards
                        GTX 1050 Ti OC LP 4G | GTX 1050 Ti LP 4G | GTX 1050 OC LP 2G | GTX 1050 LP 2G
    SKU                 GV-N105TOC-4GL       | -                 | GV-N1050OC-2GL    | -
    Stream Processors   768 (Ti models)      | 640 (GTX 1050 models)
    Texture Units       48 (Ti models)       | 40 (GTX 1050 models)
    ROPs                32 (all)
    Core Clock (MHz)    1303 - 1328          | 1290              | 1366 - 1392       | 1354
    Boost Clock (MHz)   1417 - 1442          | 1392              | 1468 - 1506       | 1455
    Memory Capacity     4 GB (Ti models)     | 2 GB (GTX 1050 models)
    Memory Type         GDDR5 (all)
    Memory Clock        7 Gbps (all)
    Bus Width           128-bit (all)
    Outputs             1 × DP 1.4, 1 × DVI-D (all); HDMI 2.0b: 2 | 1 | 2 | 1
    TDP                 75 W (all)
    Launch Date         1/2017               | 11/2016           | 1/2017            | 11/2016
    It seems that low-profile graphics cards are back courtesy of NVIDIA’s GeForce GTX 1050 series: GIGABYTE is the second company to announce such parts after MSI, and it is likely that these two companies will not be the only suppliers of such products. For those building low-power HTPCs or upgrading entry-level PCs, the GP107-based graphics adapters seem to be a good choice because the GPU supports the DirectX 12 and Vulkan APIs and has an advanced media playback engine featuring hardware-accelerated decoding and encoding of H.265 (HEVC) video.
    GIGABYTE does not specify MSRPs for its GeForce GTX 1050 Ti OC Low Profile 4G and GeForce GTX 1050 OC Low Profile 2G graphics adapters on its website, as these are typically determined at regional release. Given the positioning of these products, it is unlikely that they will cost significantly more than NVIDIA’s MSRPs for similar video cards: $139 for the GTX 1050 Ti and $109 for the GTX 1050.


  4. RSS Bot FEED

    Anandtech: AMD Set to Launch Ryzen Before March 3rd, Meeting Q1 Target

    Thanks to some sleuthing from various readers, AMD has accidentally let the cat out of the bag with regards to the official Ryzen launch date. While the company hasn’t given an exact date, the description of the talk to be given by AMD at the annual Game Developers Conference (GDC) says the following:
    Join AMD Game Engineering team members for an introduction to the recently-launched AMD Ryzen CPU followed by advanced optimization topics.
    The GDC event runs from February 27th to March 3rd, and the AMD talk is not on the exact schedule yet, so it could appear any day during the event (so be wary if anyone says Feb 27th). At this time AMD has not disclosed an exact date either, but it would be an interesting choice to announce the new set of Ryzen CPUs right in the middle of both GDC and Mobile World Congress, which takes place the same week. It would mean that Ryzen news may end up being buried under other GDC and smartphone announcements.
    Then again, the launch could easily be any time during February; this March 3rd date only really puts an end-point on the potential range. AMD has stated many times, as far back as August, that Q1 is the intended window for a launch to consumers in volume. When we spoke with AMD at CES, nothing was set in stone so to speak, especially clock speeds and pricing, but we are expecting a full launch, not just something official on paper. Ryan will be at GDC to cover this exact talk, and I’ll be at MWC covering that event. Either way, we want to make sure that we are at the front of the queue when it comes time to disclose as much information as we can get our hands on ahead of time. Stay tuned!


  5. RSS Bot FEED

    Anandtech: Corsair’s Bulldog 2.0 Gets Kaby Lake-Compatible Z270 Motherboard, New Cooler

    Corsair introduced its new Bulldog 2.0 small form-factor HTPC console-like barebones kit at CES. The new Bulldog 2.0 received a new motherboard based on the Intel Z270 PCH with improved features, as well as a new CPU liquid cooling system that is said to be quieter than its predecessor. At the same time, the kit retains its visual design and a relatively moderate price point.
    Corsair’s Bulldog case blends enthusiast-class performance and features with living room aesthetics, which is a rather rare combination. The Bulldog chassis can accommodate a mini-ITX motherboard, a full-height graphics card (no longer than 300 mm and no thicker than 90 mm), two liquid cooling systems, a 3.5” HDD, up to three 2.5” storage devices, multiple fans, and an SFX power supply. At CES Corsair demonstrated its new Bulldog 2.0 barebones kit featuring MSI’s Z270I Gaming Pro Carbon AC motherboard, its own new Hydro H6 SF low-profile cooler, and the SF600 PSU.
    Since the Bulldog 2.0 uses almost the same chassis as the initial product, the key differentiators of the new barebones kit are the motherboard and the liquid cooling system (LCS). The latter is not yet available separately, and the company has not even published its specs. The only thing Corsair says about the H6 SF is that it is quiet even when it has to cool an overclocked CPU, which is not a particularly detailed description. As for the Z270I Gaming Pro Carbon AC motherboard, it comes with an LGA1151 socket supporting both Kaby Lake-S and Skylake-S processors, two DIMM slots for up to 32 GB of DDR4 memory, a PCIe 3.0 x4/NVMe M.2-2280 slot for SSDs and a PCIe x16 slot for graphics cards. The motherboard is equipped with the new ASMedia ASM2142 controller (which uses a PCIe 3.0 x2 interface and thus provides up to 16 Gbps of bandwidth to two USB 3.1 Gen 2 ports) powering USB 3.1 Gen 2 Type-A/C headers, Intel’s dual-band Wireless-AC 8265 module (Wi-Fi 802.11ac + BT 4.2), Intel’s I219-V Gigabit Ethernet controller, a 7.1-channel Realtek ALC1220-based audio sub-system, SATA connectors and so on.
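    To put the ASM2142's uplink in perspective, a quick calculation (our own, using published PCIe 3.0 line-coding figures) shows where the "up to 16 Gbps" number comes from, and why two Gen 2 ports can oversubscribe it when used simultaneously:

```python
def pcie3_payload_gbps(lanes):
    """Usable PCIe 3.0 bandwidth: 8 GT/s per lane with 128b/130b line coding."""
    return 8.0 * lanes * 128 / 130

uplink_gbps = pcie3_payload_gbps(2)   # ~15.75 Gbps, marketed as "up to 16 Gbps"
ports_gbps = 2 * 10.0                 # two USB 3.1 Gen 2 ports at 10 Gbps nominal
# A single port can run at full Gen 2 speed, but both ports together
# (20 Gbps nominal demand) can saturate the ~15.75 Gbps x2 uplink.
```

    This is still a step up from earlier PCIe 3.0 x1 USB 3.1 controllers, whose ~8 Gbps uplink could not even feed one Gen 2 port at full rate.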
    Corsair Bulldog 2.0 Barebones Kit: Quick Specs
    Motherboard MSI Z270I Gaming Pro Carbon AC
    CPU Cooler Corsair Hydro H6 SF
    PSU Corsair SF600 (600 W 80 Plus)
    Dimensions (W×H×D) 457 mm × 133 mm × 381 mm
    Weight 5 kilograms
    Motherboard Form-Factor Mini-ITX
    PSU Form-Factor SFX
    3.5" Drive Bays 1
    2.5" Drive Bays 1 if 3.5" drive is installed
    3 if 3.5" bay is unused
    System Fans 2 × 92 mm (included)
    1 × 120 mm
    CPU Cooler Dimensions Up to 90 mm in height
    Graphics Card Length 300 mm
    PSU Length 130 mm
    External Connectors Power, Audio, USB 3.0, USB 3.1, Display, etc
    The motherboard looks to be more advanced than the one installed in the first-gen Bulldog: it is based on the latest Intel Z270 PCH, supports Optane Memory caching, and features a newer audio codec and an improved USB 3.1 (10 Gbps) controller. If the H6 SF LCS really is quieter than its predecessor, then the Bulldog 2.0 has a nice set of improvements over the first version.
    The refined Corsair Bulldog 2.0 barebones kit will be available shortly for $399.99, the price point of the first-gen product. In addition, select PC makers and retailers will offer PCs based on the Bulldog 2.0, priced according to their specifications: the higher-end models will use MSI’s liquid-cooled Hydro GFX GTX 1080 graphics cards along with Intel's K-series processors, whereas more affordable builds will use something less extreme for an SFF system.


  6. RSS Bot FEED

    Anandtech: SK Hynix Announces 8 GB LPDDR4X-4266 DRAM Packages

    SK Hynix on Monday officially announced the industry’s first 8 GB LPDDR4X (LP4X) packages for next-generation mobile devices. The new memory chips not only increase DRAM performance but also reduce its power consumption due to lower I/O voltages (and come in a smaller form-factor). Interested parties have already obtained samples of SK Hynix’s LPDDR4X ICs and the first devices featuring the new type of memory are expected to hit the market in the coming months.
    LPDDR4X is a new mobile DRAM standard that extends the original LPDDR4, and is expected to reduce power consumption of the DRAM sub-system by 18~20% according to developers (everything else remains the same: a 200~266 MHz internal memory array frequency, 16n prefetch, etc.). To do that, LPDDR4X cuts the output driver (I/O VDDQ) voltage by 45%, from 1.1 V to 0.6 V. LPDDR4X is supported by a number of mobile SoC developers. The first application processor to support the new type of memory is MediaTek’s Helio P20, which was announced nearly a year ago; the initial devices powered by the chip are likely to hit the market in 1H 2017. Another notable SoC to support LPDDR4X is Qualcomm’s new flagship Snapdragon 835, which was announced in November and detailed earlier this month. Smartphones featuring this chip will not show up for a while, but MWC is just around the corner, which lends itself nicely to various handset announcements.
    The 8 GB (64 Gb) LPDDR4X package stacks four 16 Gb DRAM parts that feature a 4266 MT/s data transfer rate and provide up to 34.1 GB/s of bandwidth when connected to an application processor using a 64-bit memory bus. For its 8 GB LPDDR4X solution SK Hynix uses a new 12 mm × 12.7 mm BGA package, which is 30% smaller compared to standard LPDDR4 stacks that come in 15 mm × 15 mm form-factor. SK Hynix’s 8 GB LPDDR4X solution has a thickness of less than 1 mm to enable PoP stacking with a mobile application processor or a UFS NAND storage device.
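    The quoted peak bandwidth follows directly from the data rate and the bus width; a quick sketch of the arithmetic (our own, matching the figures above):

```python
def peak_bandwidth_gbs(mt_per_s, bus_width_bits=64):
    """Peak DRAM bandwidth in GB/s: transfers per second times bus width in bytes."""
    return mt_per_s * (bus_width_bits // 8) / 1000

full_speed = peak_bandwidth_gbs(4266)  # 34.128 GB/s, quoted as "up to 34.1 GB/s"
lp4x_3733 = peak_bandwidth_gbs(3733)   # ~29.9 GB/s, close to the 29.8 GB/s figure
```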
    SK Hynix LPDDR4X DRAM Packages
                        8 GB LP4X-4266 | 6 GB LP4X-3733 | 8 GB LP4X-3733
    DRAM IC Capacity    16 Gb          | 12 Gb          | 16 Gb
    Number of DRAM ICs  4 (all)
    Package Capacity    64 Gb (8 GB)   | 48 Gb (6 GB)   | 64 Gb (8 GB)
    Data Rate           4266 MT/s      | 3733 MT/s      | 3733 MT/s
    Bus Width           x64 (all)
    Bandwidth           34.1 GB/s      | 29.8 GB/s      | 29.8 GB/s
    Package             FBGA-376       | FBGA-366       | FBGA-376
    Dimensions          12 mm × 12.7 mm (all)
    Voltages            1.8 V / 1.1 V / 0.6 V (all)
    Process Technology  21 nm (all)
    Availability        2017 (all)
    SK Hynix did not announce exact power consumption figures for its LP4X parts, but confirmed that the 45% reduction of I/O voltage cuts power consumption of the whole memory sub-system by around 20% versus a hypothetical LPDDR4 sub-system running at the same frequency under the same conditions. This is not an entirely meaningful comparison, because SK Hynix’s LPDDR4 offerings top out at 3733 MT/s. Assuming that the manufacturer did not otherwise optimize its LPDDR4X DRAM arrays for power, but only reduced VDDQ to 0.6 V, a memory sub-system based on the new 8 GB LP4X-4266 part should still consume less than a similar sub-system using the company’s 8 GB LP4-3733 stack, but the exact figure is unknown.
    To make its 16 Gb LPDDR4X memory ICs, SK Hynix uses its 21 nm fabrication process, which is also used to manufacture its 16 Gb LPDDR4 ICs. So, from a manufacturing technology standpoint, SK Hynix’s LP4X chips are similar to its LP4 chips.
    Initially, SK Hynix will offer only 8 GB LPDDR4X packages with a 4266 MT/s data transfer rate based on its 16 Gb DRAM ICs. Eventually, the company intends to expand the lineup with 6 GB/8 GB LPDDR4X-3733 (these are already listed in the company's Q1 databook) and LPDDR4X-3200 solutions, as well as parts based on 8 Gb LPDDR4X ICs (these are not listed in the official documents, but are mentioned in the company's official blog post). The latter make a lot of sense, as not all mobile devices are going to use 8 GB of DRAM this year. SK Hynix quotes researchers from IHS Markit, who believe that a high-end smartphone this year is going to integrate 3.5 GB of memory on average (a mix of 3 GB, 4 GB, 6 GB and 8 GB solutions on Android). Meanwhile, keep in mind that DRAM requirements for Apple’s iOS and Google’s Android are different, which is why smartphones running the latter need more memory, and handsets featuring 4 GB of mobile DRAM are going to become mainstream in 2017. By contrast, Apple’s iPhone 7 and iPhone 7 Plus have 2 GB and 3 GB of DRAM, respectively.
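    As a rough sanity check (our own arithmetic, not SK Hynix's figures, and with an assumed split between I/O and array power), dynamic switching power scales roughly with the square of the supply voltage, which shows why a 45% VDDQ cut can still translate into only ~20% savings at the subsystem level:

```python
# Dynamic switching power scales roughly as P = C * V^2 * f. With frequency
# and capacitance unchanged, dropping VDDQ from 1.1 V to 0.6 V scales
# the I/O driver power by (0.6/1.1)^2, a cut of roughly 70%.
io_power_scale = (0.6 / 1.1) ** 2   # ~0.30 of the original I/O driver power

def subsystem_saving(io_fraction):
    """Overall subsystem saving if I/O drivers consume io_fraction of total
    power (io_fraction is our assumption, not a disclosed figure)."""
    return io_fraction * (1 - io_power_scale)

# A ~20% subsystem-level saving would imply I/O drivers account for
# roughly 28-29% of total DRAM subsystem power under this simple model.
```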
    SK Hynix said that its 8 GB LPDDR4X-4266 packages are already in mass production. Mobile devices based on the new memory are expected to arrive in the coming months and it is highly likely that select manufacturers may demonstrate their MediaTek Helio P20- and LPDDR4X-based products at MWC next month.
    Related Reading:


  7. RSS Bot FEED

    Anandtech: Intel Compute Card: A Universal Compute Form-Factor for Different Kinds of Devices

    At CES 2017, Intel introduced a new form-factor for computing platforms in order to enable easy development, configuration, maintenance, repair and upgrade of various devices. Intel’s Compute Card is as small as a credit card, but packs everything needed for computing, including the CPU, DRAM, storage, communications and I/O. The first cards are set to be introduced in mid-2017.
    Computing has become so ubiquitous nowadays that almost every more-or-less sophisticated piece of hardware has a microprocessor inside. Many such devices are designed to operate for years, but since the computer chips inside them become outdated, it is almost impossible to upgrade their functionality (e.g., add new security capabilities, speed up processing, etc.) without replacing the whole unit, or a significant part of it. Alternatively, if a CPU or a memory IC fails, repairing such a device may cost a lot in terms of money and effort, which translates to downtime and lost revenue. These problems happen for various reasons, but the main two are proprietary platforms with tight integration and no upgrade path, or complex architectures that do not allow for quick replacement of faulty components. The list of such devices includes everything from business PCs to point-of-sale kiosks, and from smart TVs to commercial equipment.
    The Intel Compute Card has been designed to be a universal computing platform for different kinds of devices, including those that do not exist yet. The ultimate goal is to simplify the way companies develop, use, maintain, repair, and upgrade equipment. Makers of actual devices design a standard Intel Compute Card slot into their product and then choose an Intel Compute Card that meets their requirements in terms of feature set and price. For example, PC makers could create systems in all-in-one or clamshell form-factors and then use Compute Cards instead of motherboards. For corporate customers that need a lot of flexibility (and, perhaps, a way to address some security concerns too), every employee could have a card and switch between PCs. In other markets, such as automated retail kiosks, vendors can easily provide upgrades to deliver better functionality as Intel releases new Compute Cards in the future.
    Intel Compute Card at a Glance
    CPU Various Intel SoCs / SiPs, including Intel Core with vPro (up to 6W TDP)
    DRAM, NAND Integrated
    Cooling Fanless, but I/O docks may have their own thermal design
    Dimensions 94.5 mm × 55 mm × 5 mm
    I/O Physical USB-C + Extension
    Logical USB, PCIe, HDMI, DP and additional signals
    Wireless Wi-Fi, Bluetooth
    Docking Integrated locking mechanism
    Launch Partners Dell, HP, Lenovo, Sharp and local companies
    Initial Availability Mid-2017
    From a technology standpoint, Intel’s Compute Card resembles the company’s Compute Stick PC. However, its purpose is much wider: it is a small device that packs an Intel SoC or SiP (including Kaby Lake-based Core processors with vPro and other technologies), DRAM, NAND flash storage, a wireless module and so on into a small enclosure. Nonetheless, there are a number of important differences between the Compute Card and the Compute Stick. The Compute Card is a sealed system with “flexible I/O” in the form of a USB Type-C port and an extension connector. The “flexible I/O” is not Thunderbolt (obviously, due to power consumption concerns), but it handles USB, PCIe, HDMI and DisplayPort connectivity, and has some extra pins for future/proprietary use.
    Intel is currently working with a number of partners to enable the Compute Cards ecosystem. The list of global players includes Dell, HP, Lenovo and Sharp. There are also regional partners interested in the new form-factor, including Seneca Data, InFocus, DTx, TabletKiosk and Pasuntech.
    At the moment, Intel and its partners are not discussing their Compute Card-based projects, which is understandable. Moreover, do not expect all these companies to release their Compute Card hardware simultaneously because different equipment has different design and validation cycles.
    Speaking of the form-factor itself, this is far from the first form-factor the size of a credit card or close to it (e.g., Mobile-ITX and Pico-ITX were announced, but never took off). However, this one seems easy to integrate and is backed by Intel, which gives it credibility. There are a number of applications (and usage scenarios) that could take advantage of the Compute Card right away (e.g., corporate PCs, smart TVs, digital points-of-sale, emerging devices, etc.). However, there are also many embedded applications that just require uninterrupted operation without any need to upgrade. For those, traditional industrial PCs and boards will continue to be the mainstay, and the credit card form-factor will not bring any clear advantages.
    The other interesting aspect here is the future of the Compute Stick form factor. Given that ARM-based HDMI sticks are no longer a popular form factor, it is not surprising that Intel has decided not to update the Compute Stick lineup with Kaby Lake. Intel indicated that it will evaluate the future of the Compute Stick in 2018, and decide whether it warrants an update with the latest processors at that time. Our opinion is that the Compute Stick form factor has reached the end of its life, and it is for the Compute Card to carry on the miniaturization revolution. The Compute Card has much more flexibility in terms of differentiation on the vendors' side, and it is not encumbered by an active cooling mechanism. Obviously, the ability to just plug the device into an HDMI port is not there, but the Compute Card by itself is light enough to hang directly off a display's HDMI port. Therefore, it is possible that some vendors will deliver a Compute Stick-like platform with the Compute Card as well (albeit with a slightly different form factor).
    We expect to hear more about Compute Card related projects in Q3, either during Computex or Intel's Developer Forum.


  8. RSS Bot FEED

    Anandtech: GIGABYTE Exhibits an Aquantia AQC107 based 10G Ethernet PCIe Card

    During December, Aquantia announced that it will be launching two multi-gigabit NICs into the market, both offering 2.5G/5G capability and one adding 10GBase-T as well. We were told that industry partners would announce solutions with the chips in due course, and on Kaby Lake launch day we saw ASRock announce a pair of motherboards to come with the chips. GIGABYTE is also joining the fray, with a PCIe card to be potentially bundled with future motherboards or offered as a standalone product.
    The GIGABYTE solution is a PCIe 3.0 card featuring a single 10G port, which can be used in both half-height and full-height PCIe slots. Only an AQC107 version was at the show for 10G, and it wasn’t clear if a 2.5G/5G version using the AQC108 would be inbound; at this point in time GIGABYTE is keeping its cards close to its chest.
    Aside from showing it exists, not much else was given – if it will be sold standalone, or what the extra price will be. The interesting thing for us to determine is the BOM cost (bill of materials) for the Aquantia chips – Aquantia has mentioned that they want to undercut 10G solutions significantly, and help drive multi-gigabit ethernet to both the PC and the backhaul of a home or business network. Having PCIe cards you can slot in certainly helps, and we mentioned to GIGABYTE that if this card hits the market in the $80-$100 range for a single port, that would help (and any cheaper would mean it will fly off the shelves).
    A big question with multi-gigabit ethernet, especially 2.5G/5G, is the availability of consumer-grade switches and hubs. We might have to wait another 12-18 months for those to come through, and again, pricing is a concern here. Aquantia has said they are working with the major players in that space, but it will be up to them to announce products.
    I’ve told GIGABYTE that when the cards are available, I will take a few for testing. I’m slowly building up a sizable stack of 10GBase-T controllers, and we might start looking into relevant networking tests for comparison purposes. Any suggestions, please let us know.
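    To put the multi-gigabit tiers in perspective, here is a back-of-the-envelope sketch of how long a large transfer takes at each link rate. This ignores TCP/IP framing overhead and assumes the link, not the storage, is the bottleneck, so real-world times will be somewhat longer.

    ```python
    # Rough transfer-time comparison for the Ethernet tiers discussed above.
    # Ignores protocol overhead; assumes the link is the bottleneck.

    def transfer_seconds(file_gigabytes: float, link_gbps: float) -> float:
        """Seconds to move `file_gigabytes` (decimal GB) over a `link_gbps` link."""
        file_bits = file_gigabytes * 8e9       # 1 GB = 8e9 bits (decimal units)
        return file_bits / (link_gbps * 1e9)   # link rate in bits per second

    for gbps in (1, 2.5, 5, 10):
        t = transfer_seconds(100, gbps)        # e.g. a 100 GB backup job
        print(f"{gbps:>4} GbE: {t:7.1f} s ({t / 60:.1f} min)")
    ```

    The same 100 GB job drops from roughly 13 minutes on gigabit to under a minute and a half on 10GBase-T, which is the practical argument for multi-gigabit in the home backhaul.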
    Gallery: GIGABYTE Exhibits an Aquantia AQC107 based 10G Ethernet PCIe Card

    Related Reading:


  9. RSS Bot FEED's Avatar

    Anandtech: CES 2017: GIGABYTE Shows Passive Apollo Lake BRIX in Embedded UCFF

    Despite the quiet approach Intel has taken to its Apollo Lake platform, we are now starting to see fully developed products from the major vendors. (I personally have some Apollo Lake desktop motherboards in for testing sometime during Q1 as well.) One of these was GIGABYTE’s embedded BRIX line, focused more on commercial deployments than consumers, but interesting nonetheless.
    The main unit we saw was the GB-EAPD-4200 BRIX. Built around an Apollo Lake SoC, it looks to be passively cooled and sports three antennas. The chassis is 0.46L, with support for two DDR3L-1866 SO-DIMM memory modules, one M.2 (2280) SSD slot, and a mini-PCIe slot for a 3G WWAN module. The WWAN module accounts for one antenna; the other two serve an Intel 802.11ac card inside, most likely the 8265 or 8260 depending on where GIGABYTE was in its design cycle. Using Apollo Lake means HD Graphics 505, which for a system like this is more than adequate for video playback or casual office use.
    This unit has plenty of I/O to keep buyers satisfied: dual gigabit Ethernet ports, dual HDMI 1.4 outputs, four USB 3.0 ports, and a microSD card slot. VESA mounting support for 75mm and 100mm brackets is included.
    This unit is to be sold as a barebones kit, requiring DRAM and storage, most likely for system integrators to add their own hardware solutions depending on how local customers will want them configured (small DRAM counts and low storage, or something more robust). Interested parties will need to contact their local distribution partners for more information.
    Gallery: CES 2017: GIGABYTE Shows Passive Apollo Lake BRIX in Embedded UCFF


  10. RSS Bot FEED's Avatar

    Anandtech: HTC Announces New Phones For U: HTC U Play and HTC U Ultra

    HTC added two new smartphones to its lineup today: the HTC U Play and HTC U Ultra. Both phones use an aluminum frame paired with a curved glass back, an unusual choice of materials for HTC, which has a reputation for creating nice-looking, aluminum unibody phones. Underneath the rear glass are layers of colored minerals that reflect light in interesting ways and add visual depth, creating what HTC calls a “liquid surface” that gives the phones an iridescent sheen. Each phone comes in four different colors: Brilliant Black, Cosmetic Pink, Ice White, and Sapphire Blue.
    Gallery: HTC U Ultra: Colors

    Both phones have a single-piece volume rocker and highly textured power button on the right edge. There’s also a single downward-firing speaker and a microphone on the bottom edge. Unlike most phones that use a symmetrical pattern of slits to conceal these components, the HTC U phones just use a small circular hole for the microphone. Neither phone includes a 3.5mm headphone jack, instead using the USB Type-C port for audio.
    The smaller U Play comes with a 5.2-inch 1080p (423PPI) IPS LCD display that’s covered with Gorilla Glass. The more expensive U Ultra includes HTC’s “Dual Display” that combines a larger 5.7-inch 1440p (515PPI) IPS LCD with a secondary 2.0-inch 160x1040 IPS color LCD in the upper bezel, similar to LG’s V20. This secondary screen displays contextual shortcuts to favorite contacts or apps, reminders, and notifications among other things.
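    The quoted pixel densities follow directly from resolution and diagonal size; a quick sanity check, assuming the quoted diagonals are exact:

    ```python
    import math

    def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
        """Pixels per inch: diagonal pixel count divided by diagonal length."""
        return math.hypot(width_px, height_px) / diagonal_in

    print(f"U Play  main display: {ppi(1920, 1080, 5.2):.1f} PPI")  # ~423.6
    print(f"U Ultra main display: {ppi(2560, 1440, 5.7):.1f} PPI")  # ~515.3
    ```

    Both match the spec sheet; HTC's 423 PPI figure for the U Play truncates rather than rounds.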
    The HTC U Play has a 16MP rear camera with phase detect autofocus (PDAF) and optical image stabilization (OIS) paired with an f/2.0 lens that has a wide-angle 28mm equivalent focal length. The circular camera module sits proud of the rear glass and has a circular dual-tone LED flash offset to one side. Around front is a 16MP selfie camera that uses an UltraPixel branded sensor with larger pixels to boost low-light performance. It also has an f/2.0 lens and a wide-angle 28mm equivalent focal length that’s useful for squeezing more people into a group shot. Both the front and rear cameras have an automatic HDR mode that improves dynamic range for scenes with dark shadows and bright lights without user intervention.
    The HTC U Ultra’s front-facing camera is similar to the U Play’s, but the rear camera uses a 12MP sensor with larger 1.55µm pixels. These specifications match the HTC 10’s rear camera, which generally produces high-quality images and uses Sony’s IMX377 Exmor R sensor. The U Ultra, however, is likely using the Sony IMX378 Exmor RS sensor found in Google’s Pixel phones, because it includes an upgraded hybrid autofocus system that combines PDAF with laser AF, which the IMX377 does not support, to improve performance over a broader range of lighting conditions. The U Ultra’s rear camera has OIS to counteract hand shake during long-exposure shots and is paired with a larger-aperture f/1.8 lens that should help it capture more light. Its square camera module sticks out farther than the U Play’s, likely a consequence of that larger-aperture lens.
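    The 1.55µm pixel pitch also lets us sanity-check the sensor format. Assuming a standard 4:3 array of 4032 x 3024 for the 12MP sensor (the layout used by the IMX377/IMX378; HTC does not publish the array size), the active area works out as follows:

    ```python
    import math

    # Hypothetical 4:3 layout for a 12MP sensor; not stated in HTC's spec sheet.
    width_px, height_px = 4032, 3024
    pitch_um = 1.55

    width_mm = width_px * pitch_um / 1000    # ~6.25 mm
    height_mm = height_px * pitch_um / 1000  # ~4.69 mm
    diag_mm = math.hypot(width_mm, height_mm)
    print(f"active area {width_mm:.2f} x {height_mm:.2f} mm, diagonal {diag_mm:.2f} mm")
    ```

    The ~7.8 mm diagonal is consistent with a 1/2.3-inch type sensor, the same class as the IMX377/IMX378, which supports the identification above.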
    Both phones have a pill-shaped, capacitive-touch fingerprint scanner on the front that doubles as a home button, with capacitive navigation buttons to either side. The fingerprint sensor and buttons are not centered in the lower bezel, however. Instead, they sit very close to the lower edge, making them a little harder to use. While we have not seen inside these phones yet, this design concession is likely to make room for the display/touch controller(s). Other OEMs have found ways to relocate these components to reduce bezel size or make room for navigation buttons, but HTC does not appear to be using those methods here.
    Preliminary Specifications: HTC U Play vs. HTC U Ultra
    SoC: U Play: MediaTek Helio P10 (4x Cortex-A53 @ 2.0GHz + 4x Cortex-A53 @ 1.1GHz) | U Ultra: Qualcomm Snapdragon 821 (MSM8996 Pro AB; 2x Kryo @ 2.15GHz + 2x Kryo @ 1.59GHz, Adreno 530)
    NAND: U Play: 32GB / 64GB (eMMC 5.1) + microSD (SDXC) | U Ultra: 64GB / 128GB (UFS 2.0) + microSD (SDXC)
    Display: U Play: 5.2-inch 1920x1080 IPS LCD | U Ultra: 5.7-inch 2560x1440 IPS LCD + 2.0-inch 160x1040 IPS LCD
    Dimensions: U Play: 145.99 x 72.9 x 3.50-7.99 mm, 145 grams | U Ultra: 162.41 x 79.79 x 3.60-7.99 mm, 170 grams
    Modem: U Play: MediaTek (integrated), 2G / 3G / 4G LTE (Category 6) | U Ultra: Qualcomm X12 LTE (integrated), 2G / 3G / 4G LTE (Category 12)
    SIM Size: 1x / 2x NanoSIM (both)
    Front Camera: U Play: 16MP UltraPixel, f/2.0, 28mm focal length, Auto HDR | U Ultra: 16MP UltraPixel, Auto HDR
    Rear Camera: U Play: 16MP, f/2.0, 28mm focal length, PDAF, OIS, Auto HDR, dual-tone LED flash | U Ultra: 12MP, 1.55µm pixels, f/1.8, PDAF + laser AF, OIS, Auto HDR, dual-tone LED flash
    Battery: U Play: 2500 mAh | U Ultra: 3000 mAh
    Wireless: U Play: 802.11a/b/g/n/ac, BT 4.2, NFC, GPS/GNSS | U Ultra: 802.11a/b/g/n/ac, BT 4.2, NFC, GPS/GNSS/Beidou
    Connectivity: U Play: USB 2.0 Type-C | U Ultra: USB 3.1 Type-C
    Launch OS: Android 7.0 with HTC Sense (both)
    The U Play’s MediaTek Helio P10 SoC includes eight Cortex-A53 CPUs and a Mali-T860MP2 GPU for limited gaming. The U Ultra steps up to a Snapdragon 821 SoC; however, like Google’s Pixel phones, it uses the same peak CPU frequencies as the lower-clocked Snapdragon 820. It’s a curious design choice, but it should not have a noticeable impact on everyday performance.
    I would definitely like to see larger batteries in both phones. Inside the U Play’s 3.50-7.99 mm thick chassis is a 2500 mAh battery. For comparison, the smaller 5.0-inch Google Pixel packs a 2770 mAh battery into a 7.3-8.5 mm thick chassis, and the 5.1-inch Samsung Galaxy S7 packs a 3000 mAh battery in its 7.9 mm thick chassis. It’s a similar story for the U Ultra, whose 3000 mAh battery inside a 3.60-7.99 mm thick chassis looks small compared to the 3450 mAh battery in the 0.5 mm thicker but smaller 5.5-inch Pixel XL, or the 3600 mAh battery in the smaller (5.5-inch) and thinner (7.7 mm) Galaxy S7 edge. The U Ultra does include Qualcomm’s Quick Charge 3.0 for rapid charging, while the U Play supports 5V/2A charging.
    The U Ultra has a few other upgrades over the U Play too. While it’s possible to use voice commands on both phones to answer calls, dismiss alarms, and send messages, for example, only the U Ultra can respond if it’s asleep with the screen locked, a feature likely enabled by Snapdragon 821’s low-power DSP. HTC’s BoomSound feature is also exclusive to the U Ultra, which uses the earpiece on the front of the phone as a tweeter and the downward-firing speaker for lower frequencies.
    The HTC U Play will be available in select markets around the globe in Q1 2017, with options for either 3GB or 4GB of RAM and 32GB or 64GB of internal storage. Meanwhile, the HTC U Ultra, which comes with either 64GB of internal storage and Gorilla Glass 5 covering the front or 128GB of storage and harder sapphire glass, is available for pre-order in the US starting January 12. HTC has not revealed pricing information yet.

