Samsung Galaxy Note 8 Specifications Leak, 6GB of RAM Expected

Samsung has already announced that it will launch the Galaxy Note 8 smartphone on August 23, and several leaked renders have given a fairly good idea of the handset’s design. Now, the smartphone’s final specifications have also been leaked, and they suggest that the upcoming Galaxy Note series flagship will pack an impressive 6GB of RAM.

After leaking renders of the Samsung Galaxy Note 8 earlier, VentureBeat’s Evan Blass, aka @evleaks, has now leaked the alleged final specifications of the smartphone. As per the report, the Galaxy Note 8 will sport a 6.3-inch QHD+ (1440×2960 pixels) Infinity Display with an 18.5:9 aspect ratio. It is expected to be powered by the Exynos 8895 SoC globally, but by the Qualcomm Snapdragon 835 in the US, just like the Samsung Galaxy S8 and Galaxy S8+.

The highlight feature of the Samsung Galaxy Note 8, apart from its S Pen stylus, is expected to be its camera. The Samsung Galaxy Note 8 is expected to come with a dual camera setup with two 12-megapixel sensors. The primary wide-angle lens comes with an f/1.7 aperture and dual-pixel autofocus, while the secondary telephoto lens has an f/2.4 aperture and enables 2x optical zoom, as per the report. Both lenses are also said to offer optical image stabilisation.

As mentioned earlier, the Samsung Galaxy Note 8 will pack 6GB of RAM, an improvement over the Galaxy S8 models that launched with 4GB of RAM. The Galaxy Note 8 has been tipped to come with 64GB of built-in storage, which will be further expandable via microSD card. The handset is expected to measure 162.5×74.6×8.5mm. The Galaxy Note 8 has been tipped to house a 3300mAh battery that can be charged either wirelessly or through USB Type-C. The handset is expected to come with a fingerprint scanner at the back.

The Samsung Galaxy Note 8 has been tipped to be offered in Midnight Black and Maple Gold colours initially, with Orchid Grey and Deep Sea Blue expected to follow later. To recall, Blass recently shared renders of the smartphone in these colours. As per the report, the Galaxy Note 8 will cost around EUR 1,000 (roughly Rs. 75,400) in Europe when it starts shipping in September.

LG V30 Confirmed to Sport 6-Inch FullVision P-OLED QHD+ Display

 

LG is set to launch the much-rumoured LG V30 smartphone soon, and new information about the handset keeps trickling in regularly. In fresh reports, LG has confirmed that it is making the switch to OLED in its upcoming flagship, which will sport a FullVision display – meaning much slimmer bezels. More specifications have also been leaked, revealing the smartphone’s display size and camera details.

LG has now confirmed that its next flagship, widely rumoured to be the LG V30, will switch to OLED with a QHD+ (1440x2880 pixels) resolution, a move expected to bring better battery life and durability. The company also says that the plastic OLED, aka P-OLED, display tech will allow for curved edges on the sides, but the shared image showing the bottom part of the smartphone suggests that it won’t copy Samsung’s Edge feature; the display will only be slightly tapered.

The LG V30 display will take on the 18:9 FullVision aspect ratio, just like the LG G6, and the company says it will cover 109 percent of the DCI-P3 colour space and support HDR10 as well. LG also says that the upper and lower bezels have been reduced by 20 and 50 percent respectively, even though the upcoming flagship has a smaller overall footprint than last year’s LG V20. The logo, LG confirms, has also been moved to the back panel.

Commenting on the impending launch, LG Electronics Mobile Communications President Juno Cho said, “Expertise in OLED has long been a core competency of LG, and the technology has always been seen as a potential value-add for smartphones. With competition in the global smartphone space fiercer now than ever, we felt that this was the right time to reintroduce OLED displays in our mobile products.”

Android Authority also shared some exclusive information about the LG V30, particularly about the second screen seen on its predecessors. It claims that the secondary display is being ditched in favour of a new ‘floating bar’, which will essentially provide quick access to notifications, shortcuts, and other functions; however, details on how it will look and work are scarce. The report also states that the LG V30 may sport Gorilla Glass 5 protection and Daydream support. As per the report, the camera on the LG V30 will have an f/1.6 aperture, a Crystal Clear Glass Lens, and “improved transmittance”. It will also offer an improved audio experience, military-standard protection, IP68 water resistance, and better cooling management.

Separately, an outline sketch of the LG V30 was also leaked recently. In the image, claimed to be extracted from the user manual, the smartphone is shown with a bezel-less design, no home button, a horizontal dual camera setup centred on the back, a fingerprint sensor right below it, and the LG logo at the bottom.

Previous reports suggest that the LG V30 may be offered in 32GB, 64GB, and 128GB storage capacities, and that it could measure 151.4×75.2×7.4mm. The smartphone has been tipped to be powered by the Qualcomm Snapdragon 835 SoC and house a 3200mAh battery.

The LG V30 smartphone is expected to be announced on August 31 with US pre-orders set to begin on September 17, and the release date to be September 28.

Samsung Flip Phone SM-G9298 With Dual Displays Launched in China

 

Samsung launched its W2017 flip phone in China last year, and now the South Korean company has introduced its successor – the Samsung SM-G9298 – for the same market. Notably, the SM-G9298, aka Leader 8 or Leadership 8 (translated from Chinese), comes with a more powerful processor and improved optics over the W2017, and has been made available for purchase only in a Black colour option.

The new dual-SIM (hybrid) Samsung SM-G9298 comes with two 4.2-inch full-HD (1080×1920 pixels) Super AMOLED displays – with one on the inside and one on the outside. The smartphone is powered by a quad-core Snapdragon 821 processor with two cores clocked at 2.15GHz and the other two clocked at 1.6GHz. The Samsung SM-G9298 packs 4GB of RAM.

In terms of optics, the Samsung SM-G9298 packs a 12-megapixel rear camera with f/1.7 aperture and a 5-megapixel front camera with f/1.9 aperture for taking selfies. It comes with 64GB of built-in storage, which is further expandable via microSD card up to 256GB.

The connectivity options offered by the Samsung SM-G9298 include 4G, Micro-USB (USB 2.0), Bluetooth v4.1, NFC, Wi-Fi 802.11a/b/g/n/ac, and GPS. It houses a 2300mAh battery that is rated to provide a standby time of 68 hours. The SM-G9298 measures 130.2×62.6×15.9mm and weighs 235 grams. Other features offered by the smartphone include Samsung Pay, S Voice, and Secure Folder.

The onboard sensors on the Samsung SM-G9298 include an accelerometer, barometer, fingerprint sensor, and gyroscope. While Samsung has not announced the smartphone’s pricing, it will be made available via China Mobile in the country.

Lego Boost Review: The Best Robot Kit for Kids

Toys that teach kids to code are as hot in 2017 as Cabbage Patch Kids were in 1983, and for good reason. For today’s generation of children, learning how to program is even more important than studying a second language. Though there are many robot kits on the market that are designed for this purpose, Lego Boost is the best tech-learning tool we’ve seen for kids. Priced at a very reasonable $159, Boost provides the pieces to build five different robots, along with an entertaining app that turns learning into a game that even preliterate children can master.

Boost comes with a whopping 847 different Lego bricks, along with one motor (which also serves as a dial control on some projects), one light/IR sensor and the Move Hub, a large white and gray brick with two built-in motors that serves as the central processing unit for the robot. The Hub connects to your tablet via Bluetooth, to receive your programming code, and to the other two electronic components via wires.

You can build five different robots with the kit: a humanoid robot named Vernie, Frankie the Cat, the Guitar 4000 (which plays real music), a forklift called the “M.I.R. 4” and a robotic “Auto Builder” car factory. Lego said that it expects most users to start with Vernie, who looks like a cross between film robots Johnny 5 and Wall-E and offers the most functionality.

To get started building and coding, kids have to download the Boost app to their iPad or Android tablet. You’ll need to have the app running and connected to the Move Hub every time you use the robot. All of the processing and programming takes place on your mobile device, and the sound effects (music, the robot talking) will come out of your tablet’s speaker, not the robot itself.

Lego really understands how young children learn and has designed the perfect interface for them. The Boost app strikes a balance among simplicity, depth and fun. Boost is officially targeted at 7- to 12-year-olds, but the software is so intuitive and engaging that, within minutes of seeing the system, my 5-year-old was writing his own programs and begging me to extend his bedtime so he could discover more.

Neither the interface nor the block-based programming language contains any written words, so even children who can’t read can use every feature of the app. When you launch Boost, you’re first shown a cartoonish menu screen that looks like a room with all the different possible robots sitting in different spots. You just tap on the image of the robot you want to build or program, and you’re given a set of activities that begin with building the most basic parts of the project and coding them.

As you navigate through the Boost program, you need to complete the simplest levels within each robot section before you can unlock the more complicated ones. Any child who has played video games is familiar with and motivated by the concept of unlocking new features by successfully completing old ones. This level-based system turns the entire learning process into a game and also keeps kids from getting frustrated by trying advanced concepts before they’re ready.

Boost runs on modern iPads or Android devices that have at least a 1.4-GHz CPU, 1GB of RAM, Bluetooth LE, and Android 5.0 or above. (I also downloaded Boost to a smartphone, but the screen was so small that it was difficult to make out some of the diagrams.)

Unfortunately, Lego doesn’t plan to list the program in Amazon’s app store, so you can’t easily use Boost with a Fire tablet, the top-selling tablet in the U.S. I was able to sideload Boost onto my son’s Fire 7 Kids Edition, but most users won’t have the wherewithal to do that. Lego makes its Mindstorms app available to Fire devices, so we hope the company will eventually see fit to do the same with Boost.

When you load the Boost app for the first time, you need to complete a simple project that involves making a small buggy before you can build any of the five robots. This initial build is pretty fast, because it involves only basic things like putting wheels onto the car, programming it to move forward and attaching a small fan in the back.

Like the robot projects that come after it, the buggy build is broken down into three separate challenges, each of which builds on the prior one. The first challenge involves building the buggy and programming it to roll forward. Subsequent challenges involve programming the vehicle’s infrared sensor and making the fan in the back move.

After you’ve completed all three buggy challenges, the five regular robots are unlocked. Each robot has several levels within it, each of which contains challenges that you must complete. For example, Vernie’s first level has three challenges that help you build him and use his basic functions, while the second level has you add a rocket launcher to his body and program him to shoot.

If a challenge includes building or adding blocks to a robot, it gives you step-by-step instructions that show you which blocks go where, and only after you’ve gone through these steps do you get to the programming portion.

When it’s time to code, the app shows animations of a finger dragging the coding blocks from a palette on the bottom of the screen up onto the canvas, placing them next to each other and hitting a play button to run the program. This lets the user know exactly what to do at every step, but also offers the ability to experiment by modifying the programs at the end of each challenge.

In Vernie’s case, each of the first-level challenges involves building part of his body. Lego Design Director Simon Kent explained to us that, because a full build can take hours, the company wants children to be able to start programming before they’re even finished. So, in the first challenge, you build the head and torso, then program him to move his neck, while in the later ones, you add his wheels and then his arms.

Like almost all child-coding apps, Boost uses a pictorial, block-based programming language that involves dragging interlocking pieces together, rather than keying in text. However, unlike some programming kits we’ve seen, which require you to read text on the blocks to find out what they do, Boost’s system is completely icon-based, making it ideal for children who can’t read (or can’t read very well) yet.

For example, instead of seeing a block that says, “Move Forward” or “Turn right 90 degrees,” you see blocks with arrows on them. All of the available blocks are located on a palette at the bottom of the screen; you drag them up onto the canvas and lock them together to write programs.

Some of the icons on the blocks are less intuitive than an arrow or a play button, but Boost shows you (with an animation) exactly which blocks you need in order to complete each challenge. It then lets you experiment with additional blocks to see what they do.

What makes the app such a great learning tool is that it really encourages and rewards discovery. In one of the first Vernie lessons, there were several blocks with icons showing the robot’s head at different angles. My son was eager to drag each one into a program to see exactly what it did (most turned the neck).

Programs can begin with either a play button, which just means “start this action,” or a condition, such as shaking Vernie’s hand or putting an object in front of the robot’s infrared sensor. You can launch a program either by tapping on its play/condition button or on the play button in the upper right corner of the screen, which runs every program you have on screen at once.

Because the programs are mostly so simple, there are many reasons why you might want to have several running at once. For example, when my son was programming the guitar robot, he had one program that played a sound when the slider on the neck passed over the red tiles, another for when it passed over the green tiles, and yet another for the blue tiles. In a complex adult program, these would be handled by an if/then statement, but Boost’s challenges offer few conditional or looping blocks (you can use them in the Creative Canvas free-play mode if you want), so making several separate programs is necessary.
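For readers more used to text-based languages, here is a minimal, purely illustrative sketch in Python of the two styles. Boost itself uses only icon blocks, and every function and trigger name below is invented for the example.

# Purely illustrative analogy -- Lego Boost uses icon blocks, not Python.
# All function and trigger names here are invented for the example.

def play_sound(clip):
    # Stand-in for Boost's sound block.
    print(f"playing {clip}")

# Conventional approach: a single program that branches on the sensor reading.
def on_slider_moved(tile_colour):
    if tile_colour == "red":
        play_sound("red clip")
    elif tile_colour == "green":
        play_sound("green clip")
    elif tile_colour == "blue":
        play_sound("blue clip")

# Boost-style approach: several tiny programs, each tied to its own trigger
# and sitting side by side on the canvas, with no branching inside any of them.
def on_red_tile():      # trigger: slider passes a red tile
    play_sound("red clip")

def on_green_tile():    # trigger: slider passes a green tile
    play_sound("green clip")

def on_blue_tile():     # trigger: slider passes a blue tile
    play_sound("blue clip")

# Simulate the slider passing over a few tiles with the conventional version.
for colour in ("red", "green", "blue"):
    on_slider_moved(colour)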

While the program(s) run, each block lights up as it executes, so you know exactly what’s going on at any time. You can even add and remove blocks, and the programs will keep on executing. I wish all the adult programming tools I use at work had these features!

Though you write programs as part of each of the challenges, if you really want to get creative, you need to head to the Coding Canvas mode. In each robot’s menu, to the right of the levels, there’s a red toolbox that you can tap on to write your own custom programs. As you complete different challenges that feature new functions, your Coding Canvas toolbox gets filled up with more code blocks that you can use.

My son had an absolute blast using the Guitar 4000’s toolbox mode to write a program in which moving the slider over the different colors on the guitar neck would play different clips of his voice.

Users who want to build their own custom robots and program them can head over to the Creative Canvas free-play mode by tapping on the open-window picture on the main menu. There, you can create new programs with blocks that control exactly what the Move Hub, IR sensor and motor do. So, rather than showing a block with an icon of a guitar playing, as it does within the Guitar 4000 menus, Boost shows a block with a speaker on it, because you can choose any type of sound for your custom robot.

In both Creative Canvas and Coding Canvas modes, Lego makes it easy to save your custom programs. The software automatically assigns names (which, coincidentally, are the names of famous Lego characters) and colorful icons to each of your programs for you, but children who can read and type are free to alter the names. All changes to programs are autosaved, so you never have to worry about losing your work.

As you might expect from Lego, Boost offers a best-in-class building experience with near-infinite expandability and customization. The kit comes with 847 Lego pieces, which include a combination of traditional-style bricks, with their knobs and grooves, and Technic-style bricks that use holes and plugs.

The building process for any of the Boost robots (Vernie, Frankie the Cat, M.I.R. 4, Guitar 4000 and Auto Builder) is lengthy but very straightforward. During testing, we built both the Vernie and Guitar 4000 robots, and each took an adult around two hours to complete. Younger kids, who have less patience and worse hand-eye coordination, will probably need help from an adult or older child, but building these bots provides a great opportunity for parent/child bonding time. My 5-year-old (2 years below the recommended age) and I had a lot of fun putting the guitar together.

As part of the first challenge (or first several challenges), the app gives you a set of step-by-step instructions that show which bricks to put where. The illustrated instruction screens are very detailed and look identical to the paper Lego instructions you may have seen with any of the company’s kits. I just wish the app made these illustrations 3D so you could rotate them and see the build from different angles, as you can in the app for UBTech’s Jimu robot kits.

All of the bricks connect together seamlessly and will work with any other bricks you already own. You could also easily customize one of the five recommended Boost robots with your own bricks. Imagine adorning Vernie’s body with pieces from a Star Wars set or letting your Batman minifig ride on the M.I.R. 4 forklift.

I really love the sky-blue, orange and gray color scheme Lego chose for the bricks that come with Boost, because it has an aesthetic that looks both high-tech and fun. From the orange wings on the Guitar 4000 robot to Vernie’s funky eyebrows, everything about the blocks screams “fun” and “inviting.”

At $159, the Lego Boost offers more for the money than any of the other robot kits we’ve reviewed, but it’s definitely designed for younger children who are new to programming. Older children or those who’ve used Boost for a while can graduate to Lego’s own Mindstorms EV3 kits, which start at $349 and use their own block-based coding language.

Starting at $129, UBTech’s line of Jimu robots offers a few more sensors and motors than Boost, along with a more complex programming language, but these kits definitely target older and more experienced kids, and to get a kit that makes more than one or two robots, you need to spend over $300. Sony’s Koov kit is also a good choice for older and more tech-savvy children, but it’s way more expensive than Boost (it starts at $199, but you need to spend at least $349 to get most features), and its set of blocks is much less versatile than Lego’s.

Tenka Labs’ Circuit Cubes start at just $59 and provide a series of lights and motors that come with Lego-compatible bricks, but these kits teach electronics skills, not programming.

The best robot/STEM kit we’ve seen for younger children, Lego Boost turns coding into a game that’s so much fun your kids won’t even know they’re gaining valuable skills. Because it uses real Legos, Boost also invites a lot of creativity and replayability, and at $159, it’s practically a steal.

It’s a shame that millions of kids who use Amazon Fire tablets are left out of the Boost party, but hopefully, Lego will rectify this problem in the near future. Parents of older children with more programming savvy might want to consider a more complex robot set such as Mindstorms or Koov, but if your kid is new to coding and has access to a compatible device, the Boost is a must-buy.

3D-Printed “Earable” Sensor Monitors Vital Signs

Fitness-tracking wristbands and bracelets have mostly been used to count steps and monitor heart rate and vital signs. Now engineers have made a 3D-printed sensor that can be worn on the ear to continuously track core body temperature for fitness and medical needs.

The “earable” also serves as a hearing aid. And it could be a platform for sensing several other vital signs, says University of California Berkeley electrical engineering and computer science professor Ali Javey.

Core body temperature is a basic indicator of health issues such as fever, insomnia, fatigue, metabolic functionality, and depression. Measuring it continuously is critical for infants, the elderly, and those with severe conditions, says Javey. But the wearable sensors available today in the form of wristbands and soft patches monitor skin temperature, which can change with the environment and is usually different from core body temperature.

Body temperature can be measured using invasive oral or rectal readings. Ear thermometers measure infrared energy emitted from the eardrum and are easier to use than more invasive devices. That’s the route Javey and his colleagues took for their earable sensor, reported in the journal ACS Sensors.

For a customized fit to an individual’s ear, the team printed their sensor using flexible materials and a 3D printer. First they printed a gauzy, disc-shaped base using a stretchable polymer. This base contains tiny channels into which the researchers inject liquid metal to make electrical interconnects in lieu of metal wires. It also has grooves for an infrared sensor, microprocessors, and a Bluetooth module that transmits temperature readings to a smartphone app. They packaged the gadget in a 3D-printed case.

Because the device covers the ear, it could affect hearing, Javey says. So the engineers also embedded a bone-conduction hearing aid, made up of a microphone, data-processing circuitry, a potentiometer for adjusting volume, and an actuator. The actuator sits by the temple and converts sound to vibrations, which are transmitted through the skull bone to the inner ear.

The earable accurately measured the core body temperature of volunteers wearing it in rooms heated or cooled to various temperatures, and while exercising on a stationary bicycle.

“It can be worn continuously for around 12 hours without recharging,” he says. “In the future, power can be further reduced by using lower power electronic components, including the Bluetooth module.”

The researchers plan to increase the device’s functionality by integrating sensors for measuring EEG, heart rate, and blood oxygen level. They also plan to test it in various environments.

A Revealing Leap Into Avegant’s Magical Mixed-Reality World

IEEE Spectrum Senior Editor Tekla Perry, wearing a prototype light field display, is enthralled by a sea turtle swimming on the palm of her hand, observed using a prototype of Avegant's mixed reality technology

I’m generally not the person you want testing your virtual, augmented, or otherwise “enhanced” reality technology. I am horribly susceptible to motion sickness, my presbyopia makes focusing on Google Glass–like displays pretty much impossible, and even 3D movies do not make my eyes happy. Using a good virtual reality system, I can go maybe 30 seconds before I have to escape to the real world; with a phone-based system, even a couple of seconds is too much.

But last week I spent at least 15 minutes (though it felt like less than five) completely engaged in a sampling of virtual worlds seen through Avegant’s mixed reality viewer. The experience was magical, enthralling, amazing, wonderful—pick your superlative. I didn’t get nauseous or headachy, or feel any eyestrain at all. Indeed, my eyes felt rested (probably because that was 15 minutes not spent in front of a computer or phone screen). Also a wonderful part of the experience: the company didn’t bother with extreme security measures or nondisclosure agreements (though executives aren’t discussing specific technical details until patent filings are complete).

Avegant is a four-year-old startup based in Belmont, Calif. Its first product, the Glyph head-mounted display, typically used for personal entertainment viewing, has been shipping since February of last year. (The company name is a mashup of the founders’ names, Edward Tang and Allan Evans.)

The company announced its transparent Light Field Display technology last month. It hasn’t said when this will be ready for manufacture, though Tang points out that the Glyph’s success shows that the company knows how to design products for manufacture and bring them to market.

Avegant’s prototype mixed reality system uses a headband to position the Avegant display. It is driven by an IBM Windows PC with an Intel i7 processor and an Nvidia graphics card running the Unity game engine.

The images, explained cofounder Tang, now chief technology officer, are projected onto the retina by an array of MEMS micromirrors, each of which controls one pixel.

That, so far, is the same as the company’s Glyph system. But unlike a standard micromirror display, which reflects light straight at the person viewing it, these light field images are projected at different angles, mimicking the way light in the real world reflects off objects to hit a person’s eyes. The difference in these angles is particularly dramatic the closer someone is to the object, creating distinct and separate focal planes; the eye naturally refocuses when it moves from one plane to another.

To avoid having the eyes deal with these multiple focal planes, explained Tang, mixed reality systems like Microsoft’s HoloLens tend to keep viewers a meter or two away from objects. Light field technology, however, can use different focal planes for different objects simultaneously, so the user perceives even very close-up objects to be realistic.

To date, Tang says, most attempts to bring light field technology into head-mounted displays have involved tricky-to-manufacture technology like deformable mirrors or liquid lenses, or approaches that take huge amounts of computing power to operate, like stacked LCDs.

“We created a new method,” he said, “that has no mechanical parts and uses existing manufacturing capabilities, with a level of computation that isn’t particularly high; it can run on standard PCs with graphics cards or mobile chipsets.”

The effect is designed to be natural—that is, you see virtual objects in the same way you normally see real objects, with no eye strain caused by struggling to focus. And, in the demo I was shown, it absolutely was.

I went through two mixed reality experiences in a slightly dim but not dark room with some basic furniture. The room was rigged with off-the-shelf motion tracking cameras to help map my position; the headset I wore was tethered to a PC. After a short calibration effort that allowed me to adjust the display to match the distance between my pupils, I entered a solar system visualization, walking among planets, peering up close at particular features (Earth seemed to be a little smaller than my head in this demo), and leaning even closer to trigger the playing of related audio.

Clear labels hovered near each planet, which brings up an interesting side note: I wasn’t wearing my reading glasses, but the labels, even close at hand, were quite clear. Tang mentioned that the developers have been discussing whether, for those of us who do need reading glasses, it would be more realistic to make the virtual objects as blurry as the real ones. I vote no; I didn’t find it jarring that my hand, as I reached for planets, was a little fuzzy, particularly since the virtual objects appeared brighter than real-world ones. And it was quite lovely having so much of what I was seeing be clear.

At one point in the demo, while I was checking out asteroids near Saturn, Tang suggested that I step into the asteroid belt. I was a bit apprehensive; with my VR sickness history, it seemed that watching a flow of asteroids whizzing by me on both sides would be a uniquely bad idea, but it went just fine, and I could observe quite a bit of detail in the asteroids as they flowed past me.

The second demo involved a virtual fish tank. Tang asked me to walk over to a coffee table and look down at the surface; the fish tank then appeared, sitting on top of the table. I squatted next to the tank and put my hand into it. I reached out for a sea turtle; it was just the right size to fit in my palm. I followed it with my cupped hand for a while, and started feeling a whoosh of air across my palm whenever it swept its flippers back. I wondered for a moment if there was some virtual touch gear around, but it turned out to just be my mind filling in a few blanks in the very real scene. Tang then expanded the fish tank to fill the room; now that sea turtle was too big to hold, but I couldn’t resist trying to pet it. Then, he told me, “Check out that chair,” and in a moment, a school of tiny fish swept out from under the chair legs and swooped around the nearby furniture.

After convincing me to leave the fish demo (I was enjoying the experience of snorkeling without getting wet), Tang directed me to walk towards a female avatar. She was a computer-generated human that didn’t quite leave the uncanny valley—just a standard videogame avatar downloaded from a library, Tang said. But he pointed out that I could move up and invade her personal space and watch her expression change. And it certainly did seem that this avatar was in the room with me.

Throughout all the demos, I didn’t encounter any vision issues, focus struggles, or other discomfort as I looked back and forth between near and far and real and virtual objects.

I have not been one of the anointed few who have tested Magic Leap’s much-ballyhooed light-field-based mixed reality technology (and given the company’s extreme nondisclosure agreements, I likely couldn’t say much about it if I had). So, I don’t know how Avegant’s approach compares, though I’d be willing to put Avegant’s turtle up against Magic Leap’s elephant any day.

 What I do know is that it absolutely blew me away. I’m eager to see what developers eventually do with it, and I’m thrilled that I no longer have to struggle physically to visit virtual worlds.

Facebook Is Going All In on Augmented Reality

Facebook's Mark Zuckerberg focuses on augmented reality and camera apps at Facebook's F8 conference

Have you noticed that most Facebook apps these days have a camera button built in? Well, says Facebook CEO Mark Zuckerberg, now it’s time to use those buttons to turn on augmented reality for just about everything you’re doing in Facebook’s world.

“We are making the camera the first augmented reality platform,” Zuckerberg said, kicking off Facebook’s F8 developer conference in San Jose this morning. “I used to think glasses would be the first mainstream augmented reality platform,” he said. But he’s changed his mind.

By “camera,” Zuckerberg really means the camera button (which allows users to directly access a mobile device’s actual camera) and related photo processing tools in Facebook and related apps. Now, Zuckerberg said, Facebook is going to roll out tools to allow developers to create augmented reality experiences that can be reached through that photo feature. These tools will include precise location mapping, creation of 3D objects from 2D images, and object recognition.

Developers, he expects, will be able to apply these tools to generate virtual images that appear to interact directly with the real environment. For example, fish will swim around on your kitchen table and appear to go behind your real cereal bowl, virtual flowers will blossom on a real plant, virtual steam will come out of a real coffee mug, or a virtual companion’s mug will appear next to yours on your table in order to make your breakfast routine feel a little less lonely. Augmented reality will also allow users to leave notes for friends in specific locations—say, at a table in a particular restaurant—or let them view pop-up labels tagged to real world objects.

“Augmented reality will let us mix the digital and the physical,” Zuckerberg said in his keynote address to 4000 developers, “and that will make our physical reality better.”

Zuckerberg also predicted the advent of augmented reality street art, and suggested that as technology makes people working in traditional jobs more productive, more and more people will contribute to society through the arts.

Zuckerberg said that it will take a while to roll some of these experiences out into the world, but developers can get started now, with a closed Beta version of its AR Studio software now launching. Also available to users beginning today: a limited library of augmented effects.

Google Pixel Phones Target Apple, but May Hurt Samsung

Google’s product launch on Tuesday was as much a jab at Apple’s iPhone as a sales pitch for its new Pixel phones, with executives from the Mountain View internet search company taking shots at their competitor at every turn.

But any gains Google makes with the $649 (roughly Rs. 43,000) Pixel, billed as completely designed in-house, may come not at the expense of Apple but of phone manufacturers running its Android software, a list topped by Samsung.

“A premium Android strategy is really a strategy to take market share from Samsung,” said analyst Jan Dawson of Jackdaw Research. The South Korean company already is reeling from a highly publicized recall of its Galaxy Note 7 phones due to battery fires.

“Obviously Google doesn’t want to explicitly compete with its own partners, but this product is much more likely to compete with Samsung than Apple,” Dawson said.

Google, a unit of Alphabet Inc, clearly has its sights set on the iPhone and the luxury consumer base that it commands.

“There’s no unsightly camera bump,” hardware chief Rick Osterloh said to laughter from the audience at the phone’s debut, alluding to the iPhone’s raised camera, a feature lamented by some design aficionados.

Newly released ads for the Pixel phones land some blows on the iPhone. A rundown of the phones’ new features concludes with “3.5mm headphone jack satisfyingly not new,” a reference to Apple’s decision to eliminate the port in the iPhone 7, which riled many customers.

Imitation is flattery

Nevertheless, the Pixel line bears a strong resemblance to the iPhone, coming in two sizes and a variety of sleek finishes. The Google Assistant, powered by artificial intelligence software, is a response to Apple’s Siri. And as Google prioritizes making its own hardware under Osterloh, its emerging design philosophy echoes Apple’s.

Hardware executive Mario Queiroz touted the company’s attention to packaging, a feature that the late Apple CEO Steve Jobs famously obsessed over.

“You want the consumer first of all to have this great experience out of the box in terms of the design of the packaging,” Queiroz, a vice president of product management at Google, said in an interview.

He brushed aside concerns that Google’s hardware push will pit it against its Android partners. The technology embedded in the Pixel phone is meant to propel Android devices forward, he said.

“It’s not a zero sum game,” Queiroz said. “We believe that Google can and will be doing both things. Both delivering platforms and building our own products.”

Google could find itself squaring off against two extremely deep-pocketed rivals. Apple and Samsung are the largest smartphone handset makers and both have major marketing programs.

Samsung spent at least $50 million (roughly Rs. 332 crores) just on advertising during the Olympics in Rio de Janeiro, Brazil, according to estimates from Kantar Media.

Spokespeople for Apple and Samsung did not respond to requests for comment on Google’s launch.

Google Pixel Event: Android Maker Readies New Phones, Gadgets Featuring Its Software

Google may be getting serious about selling its own hardware gadgets.

On Tuesday, the search giant will ramp up its consumer electronics strategy with expected announcements of new gadgets, including new smartphones and an Internet-connected personal assistant for the home similar to Amazon’s Echo speaker. All are intended to showcase Google’s software and online services.

A new virtual reality headset and other devices, such as a home router, could also be on tap, according to analysts and industry blogs. Google has declined to confirm any specifics, although it previously described some of these products back in May.

Google makes most of its money from online software and digital ads. But it’s putting more emphasis on hardware as it faces rivals like Apple, Amazon and South Korea’s Samsung.

Hardware is hard

New devices could help Google keep its services front and center in the battle for consumers’ attention, said analyst Julie Ask at Forrester Research. Unlike a new mobile app or other software, she noted, it can be an expensive gamble to build and ship new hardware products. “But if you’re Google, you can’t afford to stop placing bets.”

Google already sells smartphones and tablets under the Nexus brand, which it launched in 2010 as a way to show off the best features of its Android software. But it has put relatively little effort into promoting those devices, which have mostly ended up in the hands of Google purists. Tech blogs are reporting the company is now planning to launch two smartphone models under a new brand, Pixel, and Google has hinted it may invest in an extensive marketing campaign intended to introduce the phones to the mass market.

Android already powers the majority of smartphones sold around the world. But Samsung, the biggest maker of Android phones, has increasingly been adding more of its own software – even its own Samsung Pay mobile wallet – on the phones it sells. Another big rival, Apple, has built its own services, such as online maps and its own Siri personal assistant, to replace Google’s apps on the iPhone.

Home, but not alone

Google is also likely to begin selling a voice-activated “smart speaker” called Home, apparently modeled on Amazon’s Echo. Analysts are expecting Google will announce more details, including price and availability, at Tuesday’s event.

The “Home” device will feature Google’s digital “Assistant” service, a voice-activated personal butler that can search the Internet, play music or perform other useful tasks. “Assistant” is the company’s answer to similar concierge services from rivals, including Siri, Amazon’s Alexa and Microsoft’s Cortana. The leading tech companies are all competing to assist consumers in their online activities such as shopping, since that gives the companies a better chance of selling advertising or other services.

Home-based systems like the Echo are taking on more importance with the advent of improved voice technology, said Forrester’s Ask. “You can’t assume somebody is going to go sit down at a computer or pick up a phone and type in a question anymore,” she said.

Google may also provide a closer look Tuesday at some other products, including a new virtual-reality headset that it teased in May. Like the other devices, Google’s virtual reality system could be a platform for a wide range of games and applications that are built on Google’s software.

iPhone 7, iPhone 7 Plus Users Complain of Lightning EarPods Issue; Apple Promises Fix

After reports of a bug that causes a loss in cellular service after disabling Airplane Mode on the new iPhone 7 and iPhone 7 Plus, some more glitches have reportedly been found on Apple’s latest offerings. This time the glitch pertains to the Lightning port, which appears to disable in-line controls on the connected headset after a period of no playback. Apple has acknowledged the issue, and says a fix will be issued via a software update.

Reported by several iPhone 7 and iPhone 7 Plus users across the Web, the bug affects both the bundled Lightning EarPods and third-party headsets that are connected via the Lightning-to-3.5mm Headphone Jack Adapter. The issue reportedly occurs when the smartphone’s display is off for five minutes, with the headset connected but not playing back audio during that time. After this point, audio playback will work, but users cannot adjust the volume, activate Siri, or answer calls using the in-line controls on the EarPods or third-party headsets.

Furthermore, the glitch is intermittent rather than persistent. Those experiencing the issue can unplug and reconnect the affected headset. This is an easy, temporary fix, but it doesn’t solve the underlying problem, which appears to be a software issue related to the Lightning port’s power-saving features.

Apple has acknowledged the issue and is working on a fix that should be brought to users via a software update in the near future, an Apple representative confirmed to Business Insider.

The Cupertino giant controversially dropped the 3.5mm headphone jack with the launch of the iPhone 7 and iPhone 7 Plus in September, though Apple wasn’t the first to do so – the company claims the decision required “courage”. Other major brands like Lenovo quietly did away with the headphone jack a month earlier with some models of the Moto Z.

The decision to drop the iconic headphone jack came from Apple’s need to free up space for newer technologies and to make use of the Lightning port for higher-quality audio output. The dual-camera setup, the Taptic Engine for the pressure-sensitive home button, water resistance, and a 14 percent bigger battery were all made possible by the removal of the 3.5mm port, according to Apple’s own claims. Of course, teething issues like these do not help make the decision to drop the 3.5mm headphone jack acceptable to customers.