Facebook has announced changes to the way it maintains your News Feed. If you're a regular user with moderate social activity online, the changes are for the best. If you're a business that relies heavily on referral traffic, buckle up.
The announcement says the changes are meant to better surface the content that matters to you. They were necessary because people keep posting more and more, and as the volume of content expands, each individual post's reach shrinks under the sheer heft.
In other words, as you keep subscribing your interests to Facebook, the News Feed serves you ever more posts, so it becomes easier to miss one with each passing second you're not using Facebook. And usage time is precisely the one thing these changes leave untouched. If anything, some people will probably reduce their Facebook time because of the clutter. But I digress.
Balancing out your feed
In order to give you the right mix of updates from the people and Pages you follow, Facebook has decided to favor friend posts over Page posts. Photos, videos, status updates, or links posted directly by the friends you care about will appear higher up in News Feed, to reduce your chances of missing that content.
“If you like to read news or interact with posts from pages you care about, you will still see that content in News Feed. This update tries to make the balance of content the right one for each individual person,” according to Max Eulenstein, Product Manager, and Lauren Scissors, User Experience Researcher.
Another improvement targets people who aren’t on the receiving end of tons of content every day. Facebook says it has relaxed a rule where previously the network would prevent users from seeing multiple posts from the same source in a row.
The third change has to do with those annoying nags about your friends liking or commenting on a post. These events will appear lower down in your News Feed or not at all, according to the aforementioned duo.
Pages could require a quality uptick
Although not addressing businesses directly, Facebook does have a few words of caution for those who rely on the network to stay relevant and put bread on the table. Eulenstein and Scissors stress that these changes will affect page distribution with varying mileage, “depending on the composition of your audience and your posting activity.” More importantly, “In some cases, post reach and referral traffic could potentially decline.”
According to the social network, businesses who rely on Facebook referral traffic should strive to feed meaningful content to their audience and remember to consult the Page post best practices every now and then.
The fine gents at IDC recently crunched some numbers and concluded that smartphone growth is poised to remain strong through 2019, chipping away at the PC's dwindling market share. Just a few years from now, our pocket computers will reportedly make up 77.8% of total smart connected device (SCD) shipments.
The IDC report says that the combined market of connected devices – smartphones, tablets & 2-in-1s, and PCs – is set to balloon from 1.8 billion units in 2014 to 2.5 billion units in 2019. Smartphones, for their part, will grow to represent the majority of total SCD shipments by quite a margin, according to the metrics firm.
By 2019, IDC expects the distribution to be as follows:
- 77.8% smartphones
- 11.6% PCs (both desktop and laptops)
- 10.7% tablets
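For the curious, here's a quick back-of-the-envelope sketch that turns those projected shares into absolute unit counts, using the 2.5-billion total from the same report (the shares sum to 100.1% due to IDC's rounding):

```python
# Convert IDC's projected 2019 market shares into rough unit counts,
# using the 2.5 billion total-shipment figure from the same report.
total_units = 2.5e9  # projected 2019 SCD shipments

shares = {"smartphones": 0.778, "PCs": 0.116, "tablets": 0.107}

for device, share in shares.items():
    print(f"{device}: ~{share * total_units / 1e6:,.0f} million units")
```

That works out to roughly 1,945 million smartphones against only 290 million PCs – quite a margin indeed.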
Tom Mainelli, Program Vice President for Devices at IDC, said “ultimately, for more people in more places, the smartphone is the clear choice in terms of owning one connected device.” While the claim does the numbers justice, I’d also like to probe a different reality. Say you’re marooned on an island, somehow there’s Internet there, and you have a choice between a big screen and a small one. With commuting and society out of the way, would you still prefer a smartphone over a desktop/laptop computer as an Internet machine?
Personally I appreciate screen real estate far more than portability. But I’m not a smartphone zombie, so you could say I’m biased. For some people, aspects like battery life, processing power and other stuff can make a world of difference. So how about yourself? What boat are you in? Sound off in the comments.
There have been hundreds, if not thousands, of influential characters in the history of our civilization who have stressed the importance of accepting failure in the pursuit of success. But few have articulated it as compellingly as the founder of Ford Motor Company, Henry Ford.
One who fears failure limits his activities. Failure is only the opportunity to more intelligently begin again.
- Henry Ford
Although he didn't actually invent the automobile, Henry Ford was perhaps the most important figure in the automotive industry. He transformed what had been a simple utilitarian machine into a revolutionary method of transportation.
Sometimes innovation relies on other people’s ideas. The important thing is to truly believe you can add extra value to that idea, and then execute on that belief.
Ford developed and manufactured the first automobile that people could actually afford to buy and use. His vision was vast, extending far beyond the actual object he was selling. For example, his dedication to systematically lowering costs gave birth to several business innovations, including a franchise system.
Ford was also a pacifist, and an outspoken one at that. Less admirably, he was responsible for several antisemitic texts, including The International Jew, a four-volume set of pamphlets that Ford himself published and distributed in the early 1920s.
Going green usually translates into extra spending, but there’s one particular field where taking the eco-friendly route can actually become profitable – cloud computing. Plus, you get that warm fuzzy feeling deep down inside that you’re doing the right thing.
In light of Earth Day, I'd like to talk about protecting Mother Nature by moving server-dependent operations to the cloud – as opposed to maintaining an on-premises server in a dark room of your establishment.
A blessing for technology and our planet alike
I recently came across an analysis by Sustainable Brands from 2013 that outlines the major advantages of cloud computing from an environmental standpoint. It could well have been written yesterday, because it speaks truths that still apply to cloud services today.
I won't bore you with the details, so here's the general idea. Cloud service providers can afford to lavish huge sums on things like cooling, resource allocation, and equipment efficiency because keeping their data centers running is their day job. They're good at it. So instead of spending a fortune to do the same for your clunky server (and making inefficient use of it to boot), why not let the experts handle your data? The result: major energy savings, a smaller carbon footprint, and a healthier bottom line. And wouldn't you know it, data centers are statistically safer too.
The environmentally friendly part is also true. It wasn't five years ago, but it is today. More data centers are now powered by solar and geothermal energy than ever before. In fact, clouds have become so ubiquitous that the focus now is on enhancing them, not fixing their problems.
Cloud supporters everywhere
Cloud computing has also created new business opportunities. Take Hydro 66. Based in stone-cold Sweden, these guys offer the world's first 100% hydroelectric-powered colocation data center. Since inception, Hydro 66 has allegedly saved 537,342 kWh, and its energy bill is half that of an equivalent facility in the UK.
Truth be told, many of their bragging rights are actually owed to the weather. The company's location in Boden, in northern Sweden, is naturally cold all year round, which means they spend far less on cooling than many other cloud vendors. For those asking, winters in Sweden regularly drop below -20°C (-4°F), while summers peak at around 25°C (77°F).
iPhone-maker Apple Inc. is no stranger to the power of the cloud either. Just a few years ago iCloud had yet to be born, and today Apple is a leader in cloud computing. All of the company’s American data centers are 100% powered by renewable energy. That’s right. Not a single Apple server relies on fossil fuels.
In recent years, companies like Upsite Technologies have made a name for themselves in the cloud industry by providing specialized services that keep data centers running in pristine condition. Upsite is an expert in data center airflow management. It provides a suite of products and services designed to optimize data center cooling, which allows managers to reduce energy costs. Again, two birds killed with one stone – greener and cheaper operations (this time for the cloud vendor itself).
The list could go on, but any way you look at it, the cloud plays an instrumental role in our society. Today, more than on any other day of the year, clouds everywhere deserve to make themselves heard. To celebrate Earth Day and help with the movement, visit the Earth Day Network at earthday.org. Or you can skip all that and plant a tree.
For quite a while here at 4PSA we’ve been playing with 3D printing. Why? Because it’s cool and because there are a lot of applications for 3D printing, even in the cloud. We don’t claim to be 3D printing experts, but we took a power user (read engineering) approach to 3D printing which may be interesting to some of you. So we decided to share some stuff with you in a series of articles dedicated to 3D printing.
In today's article, we want to answer a pretty interesting question – just what level of precision can you expect from an FDM printer? Fused deposition modeling is the most popular and accessible printing technology. The market is already full of printers, and even kits you can use to build your own, that cost as little as a few hundred bucks. For the test, we used one of the best printers available – the Ultimaker 2. And yes, the Ultimaker 2 may be worth more than a few hundred, but it's still an affordable piece of engineering.
The price difference is mostly tied to build quality, which ultimately affects precision. Ultimaker promises 20-micron (0.02mm) layer resolution, but there are some catches. This figure refers to the vertical resolution, which is determined by the smallest step of the stepper motor that moves the table up and down (the Ultimaker's print head only moves in a horizontal plane, parallel to the ground, while the table moves down progressively as printing advances). The nozzle width is 0.4mm, which at the time of writing is the smallest width available. This means that if you were to print a single line of plastic on the printing bed, it would be around 0.4mm wide.
Printer Precision Testing Procedure
We designed some models in Sketchup and we wanted to check the difference in dimensions between the designed body and the actual printed body.
For this article, we didn’t use complicated shapes:
The bodies were arranged in the following layout to minimize printing head travel time:
We did three separate prints at the following resolutions – 0.15mm, 0.1mm, and 0.06mm – using the official black Ultimaker ABS filament.
Slicing was done with Cura 14.09 using the Ultimaker 2 profile. The following parameters were used:
- Fill ratio 20% – provides a good balance between mechanical resistance and ABS contraction when the part cools. If you need real mechanical strength, this will not be enough.
- No brim or raft
- No support structures
- Shell thickness: 0.8mm
- Retraction enabled
- Bottom/top thickness: 0.6mm
- Extruder temperature: 260°C; bed temperature: 90°C
For the 0.15mm print, we used paper glue to make the model stick to the printing bed. On the 0.1mm print, the paper glue almost worked: all items were fine except one corner of the plate, which decided to warp noticeably.
For the 0.06mm print, we needed to use something stronger: old ABS pieces dissolved in acetone resulted in a strong solution for keeping the parts glued to the printing bed.
3D Printing Process
Unfortunately, like with any FDM printer, the printing process was quite slow. The printing process on 0.15mm took almost 4 hours, on 0.1mm almost 6 hours, and on 0.06mm more than 9 hours.
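As a rough sanity check, FDM printing time should scale with the number of layers – that is, inversely with layer height. The naive linear model below is our own assumption (real slicers also account for travel moves and speed changes), but it lines up reasonably well with the times we measured:

```python
# Naive model: print time is proportional to layer count, i.e. inversely
# proportional to layer height. Calibrate on the 0.15mm print (~4 h) and
# compare against the other two reported times.
baseline_height_mm, baseline_hours = 0.15, 4.0

for height_mm, reported_hours in [(0.15, 4.0), (0.10, 6.0), (0.06, 9.0)]:
    predicted = baseline_hours * baseline_height_mm / height_mm
    print(f"{height_mm:.2f} mm: predicted ~{predicted:.1f} h, reported ~{reported_hours} h")
```

The model predicts 6 hours for 0.1mm (spot on) and 10 hours for 0.06mm (close to the "more than 9 hours" we observed), so layer count really does dominate the print time.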
Another problem is that melting plastic means a lot of fumes and odors will invade the room. Most people don't like the smell, and even if you get used to it, it's not very healthy. What's more, one of the ingredients of ABS is believed to be carcinogenic to humans.
Disclaimer – this only happens while the filament melts; otherwise the plastic is considered hazard-free.
As you can see in the pictures below, the model is not filled with plastic during printing. Furthermore, the surface of the printed objects is not 100% smooth, but it’s quite acceptable.
For all prints, we used around 27g of plastic. The cost is very reasonable, less than 1.5 EUR / print for materials.
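Here's the quick cost arithmetic behind that figure. Note that the 50 EUR/kg filament price is our assumption (roughly typical for branded ABS spools at the time), not a number from our invoices:

```python
# Back-of-the-envelope material cost per print.
# ASSUMPTION: ~50 EUR per kg of branded ABS filament.
grams_per_print = 27
price_per_kg_eur = 50.0

cost_eur = grams_per_print / 1000 * price_per_kg_eur
print(f"~{cost_eur:.2f} EUR per print")  # → ~1.35 EUR, under the 1.5 EUR figure
```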
3D Printing Test Results
Once the parts were printed and had cooled down, we took a digital caliper with a +/- 0.1mm error margin and measured them all. The results are listed in the table below.
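For reference, the error percentages in the table are computed as the relative deviation of the measured dimension from the designed one. A minimal sketch (the sample values below are illustrative, not our actual measurements):

```python
# Relative dimensional error of a printed part versus its CAD model.
def error_percent(designed_mm: float, measured_mm: float) -> float:
    return abs(measured_mm - designed_mm) / designed_mm * 100

# Illustrative example: a 20 mm feature that prints at 19.8 mm.
print(round(error_percent(20.0, 19.8), 2))  # → 1.0 (percent)
```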
So, What Can We Expect?
Generally speaking, the error percentages are proportional to the chosen resolution: selecting the highest precision yields the lowest errors. Printing at 0.06mm gives the best results; however, it's important to consider carefully whether such high precision is required. Most of the time, the quality improvement is negligible, yet going from 0.15mm to 0.06mm doubles the printing time, if not more. The graphic below shows this more intuitively – as you can see, the yellow line is closest to the x-axis.
Another interesting fact: the measurements for the 0.1mm print don't fall exactly between those of the 0.15mm and 0.06mm prints. We suspect a variance in room temperature, because the 0.1mm print was done in a much colder room (around 14°C). With FDM, it's best to avoid temperature changes – because of the high thermal latency of the extruder head and printer bed, the parts will contract faster than expected during printing. The Ultimaker 2 tries to make things a little better, which is why the printer is enclosed on two sides. However, there is still plenty of thermal transfer.
The infill rate also affects precision. In this test we used only 20%, but if you want something more durable you should go above 40%, which will in turn degrade precision.
Is 3D printing ready for the home? It depends on how brave you are
- During these tests, the extruder stopped printing twice for no apparent reason (fixed by removing and reinserting the filament).
- The heated extruder and print plate are a safety risk, at least for kids.
- There’s a lot of manual work to do – it’s not like a paper printer.
- Surfaces of ABS prints have to be finished in order to achieve a professional look similar to injection-molded surfaces.
If you like playing (and you don’t have to be a geek for that), then you will love to get a 3D printer from Santa because:
- it allows prototyping of virtually any part, even quite large ones (e.g. 20 x 20 x 20 cm). It's true that printing models this large requires a lot of skill.
- printed parts can be quite resistant mechanically (depending on printer settings and material).
- materials are relatively cheap; the printer is not cheap yet, but it’s not that expensive either.
- you do not have to build all the models yourself – there's a great chance you'll find what you want on the Internet.
It's definitely not a mature technology, but it will eventually get there. Playing with the Ultimaker 2 was lots of fun! In fact, once you get yourself a 3D printer, you'll get ideas for applications from places you can't even imagine. Printing quality may not be perfect, but parts can actually be used in the real world – it's not just prototyping. The slow speed seems to be the biggest problem so far, but it's funny – you get a similar feeling when you wait for cakes to bake in the oven.
There’s a new app in town and it only wants a minute of your time. It lets you filter between the messages that are important to answer and the ones that you can simply ignore, and it promises to be the ideal tool for sending time-sensitive content. The idea is brilliant, the design deserves an excellence award, but the execution is utterly terrible.
It’s hard to succeed with just another instant messaging app today. WhatsApp and Facebook Messenger dominate the scene, while the likes of Viber, Skype, Kik and Snapchat act like an electric fence for the top 10 chart in IM apps. However, there may still be room for a contender, but only if the developers clean up their act.
A straight-to-the-point instant messenger
GAM is short for "got a minute," and the app does exactly what its name implies: it lets you buzz contacts with that very question, and they decide whether they can be bothered. Swipe up for no or down for yes. Accepting starts a 60-second timer; if the recipient doesn't open the chat before it runs out, your message is deleted from their phone as if you had never sent it.
“Today, our smartphone is our biggest distraction. With instant messages constantly being sent and received in bulk, it’s difficult to prioritize between important and unimportant messages,” says Klatch Labs. “At the end of the day, all you need is a minute to get a message across and who doesn’t have 60 seconds right?”
Klatch says the purpose of GAM was to prioritize your messages and interactions and be able to focus on one conversation at a time. Other side effects include avoiding awkward chat-endings and increased privacy, according to the developers.
Best-in-class design, awful functionality
The concept is smart, and the GUI is the work of a very talented design team. However, for all its visual oomph, GAM is incredibly hard to use and frustratingly unresponsive. We could barely get a conversation going on either iOS or Android, and the setup process itself was painfully long.
It's a shame, and even a bit ironic, that this design masterpiece has received such a poor implementation. Klatch Labs had better brush up on their coding skills and give GAM a do-over. They have a huge opportunity to make it big in the IM space, and they're wasting it with every second that passes without a fix to GAM for iOS and Android.
On April 19th, 1965, Gordon Moore published a technical paper predicting the growth of computing power. He initially observed that the number of transistors per square inch on integrated circuits had doubled every year since ICs were invented; he later revised the law's doubling period to two years.
This month, Moore’s Law turned 50. To this day his prediction holds true, but it might not reflect reality in a few years from now. The reason? Our need for ever-smarter computers requires that we rethink the way we build them, and perhaps even the way we operate them.
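To appreciate what half a century of that prediction means, the compounding is worth spelling out: doubling every two years over 50 years is 25 doublings.

```python
# Moore's observation, compounded: doubling the transistor count every
# two years over the law's 50-year run.
years = 50
doubling_period = 2

growth_factor = 2 ** (years // doubling_period)  # 2^25
print(f"{growth_factor:,}x")  # → 33,554,432x
```

A roughly 33.5-million-fold increase in transistor density, from one exponential rule of thumb.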
“In the beginning, it was just a way of chronicling the progress,” Moore said in an interview with the BBC. “But gradually, it became something that the various industry participants recognized as something they had to stay on or fall behind technologically. So it went from a way of measuring what had happened to something that was kind of driving the industry.”
Mark Bohr, Intel’s director of process architecture and integration, tells the British broadcaster that having Moore’s law as a guiding north star has kept the industry on pace, driving competition and churn. Without it, we’d still be using the desktop PCs from a decade ago, he said.
Who will break the law, and when?
Bob Colwell, Intel's former chief architect, made some waves a couple of years ago when he publicly predicted that Moore's Law would no longer apply starting in 2022. Most scientists agree.
The problem lies in the key ingredient used to make the chips – silicon – and how we build and operate computers. Silicon, for its part, has a physical limit where it becomes unusable as a semiconductor to create the tiny transistors that make up a chip. As we start to look at new materials, like graphene or phosphorene, the computer’s architecture also needs to be reimagined.
There are a few tech juggernauts who are currently trying to tackle the problem, and perhaps even breathe new life into Moore’s Law. Andrew McAfee is one of them. The developers he spoke to say there are code-centric advances that make Moore’s Law look “ridiculous in comparison.” However, if we find a way to translate the law and apply it to software, it would reignite it. Startup QxBranch is also pushing hard to ensure that Moore’s law lives on – through quantum computers.
Going by a pair of articles in Business Insider (1, 2), recent advancements in quantum computer development dovetail with the predicted demise of Moore's Law. Breakthrough research from UCSB and Google has produced the first stable array of nine qubits (quantum bits, quantum computing's counterpart to the binary bit in classical computing). The array can detect errors that were previously impossible to catch. It's not the last piece of the puzzle, but according to MIT physicist Scott Aaronson, this particular experiment gets us "half way there."
One major difference between traditional binary computers and quantum computers is that the latter promise to address "unsolvable problems." While a pair of regular bits can encode only one of four possible combinations (00, 01, 10, 11) at a time, a pair of qubits can juggle all four at once. Problems that take years to solve with today's computers could be tackled in hours, perhaps even minutes, by a quantum computer.
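The four-combination example generalizes: the state space grows as 2^n with the number of bits, which is why a stable array of nine qubits is a milestone. A quick illustration:

```python
# n classical bits hold exactly one of 2^n values at a time;
# n qubits can be in a superposition over all 2^n basis states at once.
for n in [2, 9, 50]:
    print(f"{n} (qu)bits -> {2 ** n:,} basis states")
```

Two bits give the four combinations above, nine qubits already span 512 basis states, and fifty would cover more than a quadrillion – the exponential gap behind the "years to minutes" claim.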
An extra benefit is that quantum computers can learn from experience, being able to tweak the code of an erroneous program and stop certain problems from arising in the future.
Some experts assume the 5-nanometer limit will mark the end of Moore's Law. Chips today are made on a 14nm process, with feature sizes shrinking by a factor of two roughly every two years. Quantum computers are still in their infancy, so the next problem to solve will be switching from silicon to a new semiconductor that lets us shrink transistors even further – and buy ourselves some time.
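Taking those two numbers at face value, it's a short runway. A rough sketch, assuming the halving-every-two-years pace holds all the way down:

```python
import math

# How long from today's 14 nm node to the assumed 5 nm limit,
# if feature size halves every two years?
current_nm, limit_nm = 14, 5
halving_period_years = 2

halvings = math.log(current_nm / limit_nm, 2)  # ~1.49 halvings
years = halvings * halving_period_years
print(f"~{years:.1f} years")  # roughly three years at the historical pace
```

In practice, node names and cadence have been slipping, so treat this as an upper bound on the optimism, not a forecast.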
Sports equals long and healthy living for those who do it, entertainment for those who watch it, and profit for those who coach and manage. A by-product of sporting events is usually a piece of metal that gets hung around the winner’s neck. But is it of any real value to humanity? If you ask satirical novelist Joseph Heller, the answer is no.
Born on May 1, 1923 in Coney Island in Brooklyn, New York, Heller was the son of poor Jewish parents, Lena and Isaac Donald Heller, who originated from Russia. Heller loved to write from an early age and he reportedly knew he wanted to do this for a living at around age 10. He was not deterred when the New York Daily News rejected his story about the Russian invasion of Finland. Had he let this episode hinder his pursuit to become a writer, the following line would have probably never been written:
Like Olympic medals and tennis trophies, all they signified was that the owner had done something of no benefit to anyone more capably than anyone else.
Catch-22, Joseph Heller
The line originates from Heller's famous satirical novel, Catch-22. He began writing it in 1953, but the book was only published in 1961. It is regarded as one of the greatest literary works of the twentieth century and uses an atypical third-person omniscient narration: non-chronological, describing events from the points of view of different characters and creating separate story lines that are not necessarily in sync.
In 1981 Heller was diagnosed with Guillain–Barré syndrome, which left him temporarily paralyzed. He later made a full recovery and even went on to marry one of the nurses appointed to look after his well being – Valerie Humphries. In December 1999, shortly after completing his final novel, “Portrait of an Artist, as an Old Man,” Heller died of a heart attack at his home in East Hampton (Long Island).
As cloud components gain more and more acceptance in IT architectures, more companies are relying on cloud computing for business processes than ever before. Storage is the primary usage scenario (59%), followed by business continuity/disaster recovery (48%), and security (44%), according to CompTIA, a technology research and market intelligence company.
Cloud computing is becoming an integral part of the IT landscape, a requirement even. In its report – 5th Annual Trends In Cloud Computing – CompTIA identifies four distinct phases that a given company will go through as it incorporates the cloud into their business operations.
Businesses on the cusp of adopting cloud technology usually prefer to build familiarity with the model first. In the Experiment phase, companies get acquainted with terminology and basic working principles, and may go as far as building sample virtual instances or using free trial software to test the cloud system in question.
“These proof-of-concept undertakings will most often be performed on public cloud systems, since they are readily available and require minimal investment. Companies may investigate the pros and cons of private clouds, but very few will begin building out those systems at this time,” says CompTIA.
Once the testing phase is over and done with, a company whose managers are keen to leverage cloud benefits will move into a non-critical use stage, according to the research firm. At this stage, cloud systems will be used for operational workflow, but will not be entrusted with sensitive data.
“Typically, a peripheral system will be chosen, which still allows companies to learn the fine details behind a cloud transition and also gain a first-hand appreciation for the integration challenges.”
This stage can be used to assess the potential benefits of cloud integration. If successful, the process moves into the third and most important step: Full Production.
Companies will only move into this phase when they’ve achieved total comfort using the cloud model. Businesses that find themselves at this stage have mitigated their security concerns, have understood all the benefits, and view cloud systems as viable, sometimes even critical for their operations.
As the company changes the way it procures and utilizes technology, new policies and procedures will be built, while existing ones are modified. The final step in the progression – which follows below – is basically a conclusion to these three phases.
Only companies that are well versed in cloud matters will find themselves at this stage, according to the study. A company currently in the Transformed IT stage likely embraced the cloud five to seven years ago – around the time cloud computing actually started to take shape – and has most likely built its entire business around cloud solutions.
“Here, companies are not simply moving existing systems or applications into the cloud; they are changing the way they work in order to reap the full benefit,” CompTIA explains.
For these companies, the cloud is no longer a separate entity that they manage in addition to other aspects of the business, but a vital component of their company’s architecture as a whole.
“Even with the typical inflation that comes with self-classification, very few companies place themselves in the Transformed IT stage. Full Production may be slightly inflated, but the majority of firms today should fall somewhere in the middle two categories. A healthy number of firms are still entering the market, led by small firms and those firms that may have a more cautious approach based on technology familiarity or regulatory concerns,” CompTIA concludes.
The quantitative online survey was carried out on 400 IT firms in the United States during July 2014. The margin of sampling error at the 95% confidence level was +/- 5.0 percentage points. Research Now helped CompTIA with the data collection using an independent panel.
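The reported margin of error checks out against the standard formula for a proportion at 95% confidence, taking the worst case of p = 0.5:

```python
import math

# Worst-case margin of sampling error for a proportion:
# z * sqrt(p * (1 - p) / n), with p = 0.5 and z = 1.96 for 95% confidence.
n = 400   # CompTIA's sample of IT firms
z = 1.96

margin = z * math.sqrt(0.5 * 0.5 / n)
print(f"+/- {margin * 100:.1f} percentage points")  # → +/- 4.9, consistent with the reported 5.0
```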
Video games portray the zombie apocalypse as a fun departure from mundane affairs, but if it were to really happen things would probably play out differently. For instance, a seemingly trivial problem like opening a can of food would become a serious issue in the absence of a can opener, or a knife. Luckily someone has imagined this scenario and offered a solution.
CrazyRussianHacker needs no introduction, but for those of you who don't keep tabs on the YouTuber's activity, he's got quite a few survival tips on offer in an extensive video library – including how to open a can by simply rubbing it against concrete and then pressing its sides to pop it open. Of course, if you can't even manage to get hold of a knife during a zombie apocalypse, a sealed can of food is the least of your problems.
With over 23 million views under its belt, this video might be familiar to some of you. But it’s still a gem, if only for the priceless cat remark two minutes into the clip. Enjoy!