Basal Traveller is the perfect carrying case for your coffee equipment
NEWS – If you use a coffee maker like the AeroPress, Kalita, Hario V60, or a small pour-over dripper, the Basal Filter Coffee Traveller is the perfect travel case for your coffee-making kit. In the image above, the case is holding an AeroPress, a manual grinder, coffee, and a mug. The interior of the lid has a sleeve to hold filters. All you’ll need is hot water.
The Traveller is made of waxed canvas and lined with ballistic nylon. It’s padded to protect your kit, and internal dividers let you configure the interior to fit your equipment for even more protection. It’s compact enough to fit into your daily bag or your carry-on for travel.
The Basal Filter Coffee Traveller is $45.00 from Amazon.
Filed in categories: News
Tagged: Coffee
Basal Traveller is the perfect carrying case for your coffee equipment originally appeared on The Gadgeteer on October 19, 2018 at 8:00 am.
There’s a war brewing to become the cloud pharmacy for men’s health. Roman, which launched last year offering erectile dysfunction medication and recently added a ‘quit smoking’ kit, is taking on $97 million-funded Hims for the hair loss market. Today, Roman launched four new products it hopes to cross-sell to users through a unified telemedicine subscription and pill delivery app. It now sells meds for premature ejaculation, oral herpes, genital herpes, and hair loss at what’s often a deep discount versus your local drug store. And for those who are too far gone, it’s launching a “Bald Is Beautiful, Too” microsite for finding the best razors, lotions, and head shaving tips.
Roman CEO Zachariah Reitano
“It’s unlikely that you’ll buy razors from Bonobos or pants from Dollar Shave Club. But with a doctor, it’s actually the exact opposite,” Roman CEO Zachariah Reitano tells me. “As a customer you’re frustrated if they send you somewhere else.” And so what started as a single-product startup is blossoming into a powerful product mix that can keep users loyal.
Roman starts with a telemedicine doctor’s visit where patients can talk about their health troubles without the embarrassment of going to their general practitioner. When appropriate, the doctor can then prescribe medications that customers can instantly buy through Roman.
“If you have something that’s truly consuming your day-to-day, it makes it really hard or nearly impossible to think about the long-term. If you’re 30 pounds overweight and experiencing erectile dysfunction, [it’s the latter symptom] that’s dominating your head space,” Reitano explains. The doctor might focus on the underlying health issue, but most humans aren’t so logical, and want the urgent issue fixed first. Reitano’s theory is that if Roman can treat someone’s erectile dysfunction or hair loss first, they’ll have the resolve to tackle bigger lifelong health challenges. “We’re hoping to work on this so you can take a deep breath and get the monkey off your back,” the CEO tells me.
But one thing Roman won’t do is prescribe homeopathic or otherwise spurious remedies. “We will only ever offer products that are backed by science and proven to work,” Reitano declares. Taking a shot at Roman’s competitor, he says, “Hims sells gummies. Roman does not. No doctor would say Biotin would help you regrow hair,” plus the vitamin can distort blood test readings, making it tough to tell if someone is having a heart attack.
“Roman will never slap sugar on vitamins, sell them on Snapchat, and say they’ll regrow your hair,” Reitano jabs. Roman also benefits from the fact that Reitano’s father and company advisor, Dr. Michael Reitano, was a lead author on a groundbreaking study about how Valacyclovir could be used to suppress transmission of genital herpes.
So what is Roman selling?
With Roman, Hims, Amazon acquisition PillPack, and more, there’s a powerful trend in direct-to-consumer medication emerging. Reitano sees it as the outcome of five intersecting facts.
Roman’s $88 million Series A, announced last month, is proof of this growing trend. Investors see the traditional pharmacy structure as highly vulnerable to disruption.
Roman will have to defeat not just security threats and competitors, but also the status quo of keeping a stiff upper lip. A lot of men silently suffer these conditions rather than speak up. By speaking candidly about his own erectile dysfunction as a side-effect of heart medication, Reitano is trying to break the stigma and get more patients seeking help wherever feels right to them.
WellDesk XenStand laptop stand review
REVIEW – Your mother was always telling you to sit up straight, but how do you do that when you’re slumped over a laptop all day long? One way to improve the ergonomics is to raise the display to eye level with a laptop stand like the XenStand from WellDesk. Let’s take a look.
The WellDesk XenStand is an adjustable laptop stand that is made in the USA of Baltic birch plywood and has the same DIY style as their dual monitor standing desk, which we reviewed a few months ago.
The XenStand has 2 support stands and 2 feet that are used to elevate the stand into higher positions.
The WellDesk XenStand is made of nicely finished wood with sanded, smooth edges and no splinters to worry about. It has a simple interlocking design where the two main pieces fit together to form an X. The stand is solid and stable, yet it can be quickly disassembled if needed.
The back of the stand props up the display of the laptop while the hooks in the front of the stand prevent the laptop from sliding off the stand. I did all my testing of the WellDesk with my 12-inch MacBook, which is the smallest sized laptop that you would want to use with this stand. With the MacBook in place, the back of the laptop is raised approximately 4.25 inches.
In this position, the display is raised for more comfortable viewing. Of course, you will also need to use an external keyboard and mouse for correct ergonomics.
If the first configuration doesn’t raise the display high enough for you, there are two other configurations that you can try. By adding the included feet to the inner notches, the back of the display is raised even higher.
In this position, the back of the computer is raised 5.75 inches off the table.
Last but not least, you can add the feet to the rear notches and position the front edge of the laptop in the larger hooks. As you can see, this configuration will not work for my 12-inch MacBook because the display does not open far enough.
The WellDesk XenStand is a simple laptop stand that lifts the display to eye level so you can use an external keyboard and mouse with your laptop, sit up straight, and improve your posture and ergonomics. The stand is easy to assemble and disassemble, and it isn’t too bulky to take with you in the included drawstring canvas bag.
Price: $37.95 MSRP
Where to buy: Amazon
Source: The sample for this review was provided by Well Desk.
Filed in categories: Reviews
Tagged: Laptop stand
WellDesk XenStand laptop stand review originally appeared on The Gadgeteer on October 23, 2018 at 11:00 am.
The Israeli cybersecurity venture studio Team8 has raised $85 million in new financing from a clutch of new and returning strategic investors including Walmart, Airbus, SoftBank, and Microsoft’s investment arm, M12.
The studio’s plans to raise a larger fund were first reported by PEHub in May.
Team8 has long believed that by combining the strengths and security interests of strategic corporate partners it could develop better cybersecurity solutions (or companies) that would be attractive to its investors and clients.
Indeed, that was the thesis behind the $23 million that Team8 raised in 2016 when it was still proving out the model.
The company’s previous rounds of funding managed to bring Cisco Investments, Bessemer Venture Partners, Innovation Endeavors and Alcatel-Lucent into the fold. Now banks like Scotiabank and Barclays, ratings agencies like Moody’s, and insurers like Munich Re are coming on board to add their voices to the chorus of wants and needs that keep the crack cybersecurity experts from Team8 churning out new companies.
This model, in which Team8 partners with the corporate clients who will become the customers of the startups it creates, isn’t confined to the security industry, but it’s where the idea has already created successful outcomes for all parties.
Earlier this month, Temasek (also a Team8 investor) acquired Sygnia, a company from the venture studio’s portfolio that had only emerged from stealth a year ago, for $250 million.
As we’d written at the time, Sygnia was typical of a Team8 investment. The company had secured only $4.3 million in funding and was staffed by elite security specialists from Israel: chief executive Shachar Levy, Ariel Smoler, Arick Goomanovsky and Ami Kor, with Nadav Zafrir, the co-founder and CEO of Team8 and a former commander of Unit 8200, as its chairman.
Zafrir and Shachar are both full-time members of Team8, along with Israel Grimberg, Liran Grinberg, Assaf Mischari, a former technology leader in Unit 8200, and Lluís Pedragosa, a former partner at Marker LLC.
The Tel Aviv-based company has invested in four companies that are currently selling their wares on the open market and has another four that are still operating in stealth mode. In all, the group has raised $260 million to date and employs 370 people around the world.
What is seemingly unprecedented is the level of cooperation among the organizations working with Team8 to identify threats and develop technologies that can respond to them.
According to a statement announcing the fund’s launch, companies investing in Team8 will be required to contribute insights from their Chief Information, Technology, Data and Security Officers to identify problems, develop solutions, and work on sales and marketing services for these new businesses.
“Rogue states, hackers, terrorists and criminals are intent on wreaking physical, financial and societal havoc and catastrophic damage on governments, corporations and individuals,” said Eric Schmidt, Founding Partner of Innovation Endeavors, a lead investor in Team8, in a statement. “As data continues to proliferate and our technical capabilities expand, cyber attacks and wars will increase in number and intensity.”
Team8 investors are required to nominate a “senior champion” from their business unit in addition to the corporate venture capital or corporate development team, to guide the partnership and provide executive mindshare for the mutual work together.
As shared owners in Team8 companies, these investors are deeply invested in ensuring only the best ideas, technologies and companies are created. Besides meeting in person and as a group throughout the development process of new companies, strategic investors bring their chief executives to Israel as well as host Team8 and its portfolio companies for workshops at their headquarters for continuous knowledge-sharing and strategy building, according to a Team8 spokesperson.
And the company will be expanding its focus beyond just cyberdefense thanks to its latest funding and its new partners.
“Going forward, we will continue to focus on the enterprise, but not necessarily just defense,” a spokesperson for the company wrote in an email. “The indirect impact of cyber on the enterprises are the missed opportunities to experiment, integrate and onboard new technologies because of security, compliance and fear of exposure. We’re currently working on zero-trust networks for multi-cloud environments, secure on-ramping of blockchain, safe collaboration on sensitive data; and rethinking how machine learning can significantly impact the business. These are designed with built in security, data science, and intelligence, to allow companies to prosper and not be inhibited by security controls.”
What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.
Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.
The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
These were early examples of deriving metadata from the image and using it proactively, either to improve that image or to feed forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.
Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm²; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm².
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
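As a quick sanity check on those numbers, here is a back-of-the-envelope calculation using the sensor dimensions quoted above (the iPhone XS figures are approximate):

```python
# Rough light-gathering comparison using the sensor dimensions quoted above.
aps_c_area = 23 * 15          # mm^2 for a typical APS-C DSLR sensor -> 345
iphone_xs_area = 7 * 5.8      # mm^2, approximate iPhone XS sensor -> ~40.6
ratio = aps_c_area / iphone_xs_area

print(f"APS-C: {aps_c_area} mm^2, iPhone XS: {iphone_xs_area:.1f} mm^2")
print(f"The phone sensor gathers roughly 1/{ratio:.1f} of the light")  # ~1/8.5
```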
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
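As a loose illustration of that idea, you can think of the stream as a fixed-size ring buffer of recent preview frames: every new frame pushes out the oldest one, so when the shutter fires the camera already has history to work with. The class and the numbers below are purely hypothetical, not how any particular phone’s image pipeline is actually implemented.

```python
from collections import deque

FRAME_HISTORY = 60  # keep roughly the last 60 limited-resolution frames (illustrative)

class FrameRingBuffer:
    """Sketch of the 'always recording' idea: while the camera app is open,
    each preview frame is pushed in and the oldest is silently discarded."""

    def __init__(self, size=FRAME_HISTORY):
        self._frames = deque(maxlen=size)  # deque drops the oldest item automatically

    def push(self, frame):
        self._frames.append(frame)

    def recent(self, n):
        """Return the n most recent frames, e.g. to merge when the shutter fires."""
        return list(self._frames)[-n:]
```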
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
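A toy version of that combination step might look like the sketch below, which assumes the bracketed frames are already aligned and scaled to [0, 1] and simply weights each pixel by how well exposed it is. Real pipelines, Google’s and Apple’s included, merge raw bursts with far more sophisticated alignment, noise modeling, and tone mapping.

```python
import numpy as np

def naive_hdr_merge(frames, sigma=0.2):
    """Toy exposure fusion: weight each pixel by its closeness to mid-gray,
    so blown-out and crushed pixels contribute little to the merged result.
    'frames' is a list of aligned images as float arrays in [0, 1]."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-8                      # avoid divide-by-zero
    merged = np.sum([w * f for w, f in zip(weights, frames)], axis=0) / total
    return np.clip(merged, 0.0, 1.0)
```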
Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
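Once such a mask exists, the compositing itself is conceptually simple. The sketch below assumes some upstream model has already produced a subject mask, and it uses a plain Gaussian blur as a stand-in for the background; as discussed later, that kind of blur is exactly the shortcut that more ambitious bokeh simulation tries to move beyond.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_composite(image, subject_mask, blur_sigma=8):
    """Illustrative 'portrait mode' compositing. 'image' is an HxWx3 float array
    in [0, 1]; 'subject_mask' is an HxW array, 1.0 on the subject and 0.0 on the
    background (in practice derived from stereo, motion, or a learned model)."""
    blurred = np.stack(
        [gaussian_filter(image[..., c], blur_sigma) for c in range(image.shape[-1])],
        axis=-1,
    )
    mask = subject_mask[..., None]   # broadcast the mask over the color channels
    return mask * image + (1.0 - mask) * blurred
```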
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.
DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better and more a question of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. As with a dog walking on its hind legs, we are amazed that it happens at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical), you can put a whole separate camera right next to the first, one that captures photos extremely similar to those taken by the first.
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
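To make the computational cost concrete, here is a minimal sketch of registering a frame from a second camera onto the first using generic feature matching and a homography in OpenCV. It is only an illustration of the alignment problem; real phone pipelines rely on factory-calibrated geometry and dedicated hardware rather than this general-purpose approach.

```python
import cv2
import numpy as np

def align_secondary_to_primary(primary_gray, secondary_gray):
    """Warp the secondary camera's frame into the primary camera's pixel grid
    by matching ORB features and fitting a homography with RANSAC."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(primary_gray, None)
    kp2, des2 = orb.detectAndCompute(secondary_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = primary_gray.shape
    return cv2.warpPerspective(secondary_gray, H, (w, h))
```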
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
Saddleback Leather Simple iPad case review
REVIEW – Many moons ago I owned an original Saddleback Leather iPad case. It was a beautiful, beefy piece of leather. The new Simple iPad Case is a completely different animal. Simple and streamlined, it’s made for portability and optimal day-to-day usability. Fear not, however: you still get that gorgeous full-grain leather. To the review!
It’s a leather iPad case for your iPad, iPad Air, or iPad Air 2.
The Saddleback Leather “Simple” line offers lighter, streamlined products as an alternative to their beefier, full-featured gear. The new Simple iPad Case is a great example of this. You get all the benefits and style associated with quality leather in a streamlined and extremely functional package.
While it is a simple shell-style design, Saddleback fans have no need to fret about the quality and construction of this item. A 7″ by 10″ piece of full-grain leather forms the back panel, while two 1″ by 7″ strips of leather on the front face form a pocket into which you slip your iPad.
My tester is the newer black leather, backed by pigskin suede to protect the iPad. It’s a beautiful, thick leather that measures roughly 1/8″ including the suede backing. It is thinner than some of the older leather pieces I’ve had from Saddleback, but I think it’s just as tough. It’s also got a great pliant feel. I like it quite a bit.
You’ll also see some subtle embossed logo work on the back face: the Saddleback logo and the tribute to Blue, founder Dave Munson’s dog. They’re nice touches that are tastefully handled, adding some character to the piece.
Black has always been my favorite color in Saddleback products, and they do a quality job with the dye work here. The leather is dyed through the whole piece, so scratches don’t pull up an underlying color. I haven’t seen any dye rub-off on clothes or other gear. It’s marvelous and speaks to my artistic side.
Saddleback Leather deserves a lot of credit here for their commitment to leather as a medium. There are three materials used in the build of this case: leather, pigskin suede, and marine-grade thread to keep it all together. That’s it. Even the spacers/bumpers between the layers are made wholly from leather dyed to match the case:
It would be easier (and probably cheaper) to solve design problems in a case like this with additions like plastic tabs, foam padding, or elastic webbing. Not that those solutions are inherently bad, but I appreciate that Saddleback doesn’t go that route. It’s a truly unique leather-focused design solution that’s *just* a bit more special as a result.
This case is designed for the 9.7″ iPads, specifically the 2017 iPad and multiple 2018 models (the iPad 9.7, Air, Air 2, and 9.7 Pro). You’ll find an assortment of cuts to accommodate the whole range of speakers, ports, and buttons on these models. As we walk through the slots, note that mine is an iPad Air 2. Here’s the audio-in port and sleep/wake button port. It looks tight, but the leather is flexible enough to get your fingers in there for obstruction-free operation:
The same goes for the bottom ports for the lightning cable and speakers:
The right-panel rocker buttons have an additional slot cutout for easy access:
You’ll also find two cutouts for the rear camera to accommodate multiple model iterations:
When I first received the Simple iPad Case I was concerned by the fact that the front face and sides are completely exposed. After kicking it around a bit, however, I’m becoming more and more impressed with the design. There’s 1/4” of extra leather around all edges, so that thick leather absorbs side impacts to protect your tablet. It absolutely brings more protection than silicone shells like the Apple iPad cover, and only adds 8 ounces of weight to your kit if you’re including this in your daily carry.
Overall usability is excellent. There’s no front flap or cover to mess with here. The entire screen, edge to edge, is accessible, with generous slots cut for the home button and front camera. Everything is easy to get to, and you don’t find yourself fumbling to reach the ports. It’s also wonderfully easy to handle, working perfectly with the iPad’s form factor. Nothing interferes with regular operation, and it feels solid in your hands. It slips easily in and out of your bag, with no protrusions or extra bits that could get caught on zippers. If you like your iPad covers lean & mean, this cover gets you there with the added protection and style of excellent leather.
The simple design does lack some capabilities found in more feature-rich cases. There’s no front cover for additional protection. It’s not compatible with the Apple keyboard. There’s also no integrated stand. If these are capabilities you are looking for, you’ll need to look elsewhere. If you’re looking for a great leather case that works well with the iPad’s natural form factor, however, the Simple iPad Case is worth putting on your shopping list.
If you’re looking for a well-built, straightforward iPad case with great looks, the Saddleback Leather Simple iPad Case is a great choice. The leather is fantastic, usability is excellent, and the simple form factor makes for easy handling and day-to-day use. You’ll also get Saddleback Leather’s famous 100-year warranty. This one is now in my EDC lineup, and I expect it to stay there for quite some time. Maybe not 100 years. We’ll see.
Price: $59.00
Where to buy: Saddleback
Source: The sample of this product was provided by Saddleback Leather.
Filed in categories: Reviews
Tagged: Cases and Covers, iPad
Saddleback Leather Simple iPad case review originally appeared on The Gadgeteer on October 21, 2018 at 9:42 am.
Inateck 9 in 1 USB-C Hub review
REVIEW – Just recently, I gained access to a MacBook Pro, and I immediately saw the need for a USB-C hub to be able to use all of my desired accessories and peripherals. This was especially true since the only ports on my MacBook Pro are two USB-C ports. A couple of weeks ago, I got the chance to test and review the Inateck 9 in 1 USB-C Hub. Here is a review of my experience.
The Inateck 9 in 1 USB-C Hub is a compact and lightweight hub that allows you to use one USB-C port on your MacBook/Laptop and expand it to accommodate just about every commonly used accessory/peripheral.
1 x Inateck 9 in 1 USB-C Hub
1 x Instruction manual
On the side of the hub shown below, from left to right, there is a USB Type C port, 2 USB 3.0 ports, a lower SD card reader and an upper micro SD card reader.
On the opposite side, shown below, from left to right there is an HDMI port, a VGA port, a gigabit Ethernet port, and a 100W PD USB-C charging port.
At the bottom of the hub, there is a built-in USB-C cable that can be tucked away until you are ready to use it.
This hub performed well in every way. In the first picture below, I am showing the MacBook Pro USB-C AC power adapter connected to the pass-through USB-C power port on the hub. This allows you to charge/power the MacBook Pro while using the hub and also provides the USB charging port with power.
Below, I have a USB-C male to USB-A female adapter connected to the USB-C data port on the hub. I then have a 2.4GHz wireless dongle connected to the adapter, which allows me to use my wireless mouse.
Here I have both my MacBook Pro and a monitor connected to the hub. I am mirroring the screen. The monitor is connected to the hub via the HDMI connection, and I also successfully connected the monitor via the VGA connection.
The hub performed without any access or operational issues for everything I tested. The only thing I noticed is that after each extended period of use, approximately 8 hours, the body of the hub gets quite warm. I am not sure how the heat will affect the performance of the hub over time and after repeated, continuous use.
I really like the 9 in 1 USB-C hub. It allows me to add just about every accessory that I need to my MacBook, and to use just about every data drive that I commonly use, whether directly or via a USB card reader. While I have not tried it yet, I feel confident that I could use this hub with my PC laptop via a USB-C female to USB-A male adapter. This simple but effective device gets two thumbs up from me!
Price: $59.99
Where to buy: Amazon or Inateck Website
Source: The sample for this review was provided by Inateck.
Filed in categories: Reviews
Tagged: Type-C USB, USB hub
Inateck 9 in 1 USB-C Hub review originally appeared on The Gadgeteer on October 20, 2018 at 9:30 am.