Social Media

Talent trickles out of Snapchat as Timehop founder leaves

Less than a year after joining Snapchat, Jonathan Wegener, founder of the social media nostalgia app Timehop, is departing the company. He tells TechCrunch he wants to build his own thing again, but the fact that the #1 teen social network isn’t exciting enough to keep him is telling.

Share prices are down, user growth has slowed to a crawl in the face of Instagram’s competition, and the company is missing revenue targets. Reports increasingly indicate Snap is a tough place to work, ruled by CEO Evan Spiegel with an iron fist and run by his inner circle even when they lack experience. SVP of Engineering Tim Sehn, early employee Chloe Drimal, VP of HR and Legal Robyn Thomas, and VP of Security and Facilities Martin Lev have all parted ways with Snap since July.

Timehop (left) vs Facebook’s clone On This Day (right)

Wegener’s mom made him a Snapchat cake in 2016, foreshadowing his role there

A tipster pointed to Wegener’s Twitter bio noting he’d left Snapchat. We asked Wegener why he’s leaving and he told TechCrunch “I had a great year at Snap — love the product, team, and learned a ton. But ultimately I’m most passionate about building companies and products from scratch, and I decided to take some time off to travel and be inspired before I get back into the swing of things in 2018. Keep an eye out next year.”

Timehop had raised $14 million from investors for its app that showed your social media posts from that day in years past. It had 6 million daily users by 2014. But Facebook cloned it with its “On This Day” feature in 2015, which had grown to 60 million daily visitors by 2016. A Timehop redesign bombed and Wegener left in January 2017 to go to Snapchat.

There he must have found a similarly grim situation. Also cloned by Facebook and its subsidiary Instagram, Snapchat had lost momentum. Snapchat Memories, the feature most similar to Timehop, saw little evolution this year. The stock Wegener was promised is surely worth far less now than he might have expected before the hyped May IPO. And fighting a colossus in a crowded space sounds a lot less fun than starting something fresh.

Published at Thu, 21 Dec 2017 17:38:36 +0000


Facebook’s fake-news flag no longer flies as related articles take over

Facebook’s ongoing battle against fake news is moving forward as the company removes one tool and replaces it with another. On Wednesday, December 20, the company announced that disputed flags will no longer be used to alert users to potentially fake news; instead, potential fakes will show related articles to offer more context.

Facebook says that the change is based both on academic research and its own studies of user interactions. “Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended,” wrote Tessa Lyons, Facebook product manager. “Related Articles, by contrast, are simply designed to give more context, which our research has shown is a more effective way to help people get to the facts. Indeed, we’ve found that when we show Related Articles next to a false news story, it leads to fewer shares than when the Disputed Flag is shown.”

Facebook says that while the flag alerted users to fake news, the label didn’t help users determine which part of the article was false. Another problem is that the flag required at least two fact-checkers to dispute an article before it received the false label. In many cases, Facebook says, false articles were slipping through because there weren’t enough fact-checking organizations in that particular area.

After testing Related Articles earlier this year, Facebook says that pairing relevant links with the original article led to fewer shares. Those additional links often include articles from the same fact-checking organizations that provided information for the disputed flags. The articles help readers see not only that the story isn’t correct, but which part of it is off. Comparing the two techniques, shares dropped with Related Articles, while clicks through to the full article were the same as with the disputed flag icon.

Facebook is also launching a new initiative that will further the company’s research on fake news and how to prevent its spread. The company says the expanded research won’t be obvious right away, but could help the platform improve over time.

The social media platform has launched several tools over the last year designed to curb fake news, including the now-obsolete disputed flag as well as an “i” icon for learning more about the source, new trust indicators, and the elimination of the ability to edit a link preview.

Published at Thu, 21 Dec 2017 16:56:25 +0000


Grammarly finds all those annoying grammatical errors so you don't have to

The digital editor every person needs.
Image: pexels

Whether you’re sending out a work email or writing a blog post, grammar errors can make you seem unprofessional or, worse, incompetent. Even if you write for a living, like we do, the truth is that mistakes happen. 

Sure, you know the difference between “your” and “you’re” as well as “their,” “there,” and “they’re”… right?

Grammarly Premium is a virtual personal editor that can help clean up your emails, reports, and just about anything else you compose on a computer. Once you download Grammarly, it’ll work across services like Gmail, Facebook, Twitter, LinkedIn, and practically anywhere else you’d write on the web, without opening an additional program. The Premium version of the app detects everything from punctuation mistakes and contextual errors to weak vocabulary and potential plagiarism.

An annual subscription normally goes for $139.95, which is money well spent if you create a lot of content and send a lot of emails. But if you act quickly, you can snag it for $69.99 — nearly 50% less than its usual price.

Image: Grammarly

Published at Thu, 21 Dec 2017 15:13:15 +0000


Government requests for your Facebook data continue to grow, report says

Seventy-eight thousand, eight hundred and ninety — that is the number of times Facebook received government requests to access user data worldwide, and in over half of the cases from the U.S., non-disclosure prevented the user from even knowing about the access. The numbers come from Facebook’s Transparency Report (previously called the Government Requests Report) covering the first six months of 2017. The report, released on Monday, December 18, also tallies the numbers of requests for potential intellectual property violation, such as stolen copyrighted images and counterfeit products, for both Facebook and Instagram.

The government requests for data from Facebook increased by 21 percent from the previous six months, jumping up from 64,279. The U.S. was responsible for more than 32,000 of those requests, with about 85 percent of them resulting in providing some level of data. Facebook says that, in the U.S., 57 percent of those requests included a non-disclosure clause, which meant that Facebook could not alert the user to the government’s request for access.

Content restrictions, or requests to remove content, saw an even larger jump, though one particular incident drove those numbers higher than normal. Facebook says that a video of a school shooting in Monterrey, Mexico, was removed 20,506 times. That incident alone accounts for most, but not all, of the increase in content restrictions from the previous report, with the total jumping to 28,036 from 6,944.

When the transparency report first launched in 2013, the number of requests for data was only about 25,000 worldwide. Since then, however, Facebook’s user count has nearly doubled, from 1.15 billion to 2 billion monthly active users.

While Facebook reveals this data every six months, the social media platform is now expanding the report to also include requests over intellectual property violations. Users reported copyright violations some 224,464 times, trademark violations 41,854 times, and counterfeits 14,279 times. Instagram, meanwhile, had about 70,000 reports for copyright, around 16,500 for trademark, and about 10,000 for counterfeits.

Because this is the first time the company has reported on intellectual property, there is no earlier data for comparison, but in all three categories, for both Facebook and Instagram, the number of reports was higher in June than in January, suggesting an increase over the course of the year.

“We believe that sharing information about IP reports we receive from rights holders is an important step toward being more open and clear about how we protect the people and businesses that use our services,” wrote Chris Sonderby, Facebook’s deputy general counsel. “Our Transparency Report describes these policies and procedures in more detail, along with the steps we’ve taken to safeguard the people who use Facebook and keep them informed about IP.”

Both the government requests and the IP reports can be accessed from Facebook’s transparency webpage.

Published at Tue, 19 Dec 2017 20:44:07 +0000


Facebook’s facial recognition now finds photos you’re untagged in

Facebook wants to make sure you know about and control the photos of you that people upload, even if they don’t tag you. So today, Facebook launched a new facial recognition feature called Photo Review that will alert you when your face shows up in newly posted photos so you can tag yourself, leave it be, ask the uploader to take the photo down, or report it to Facebook.

The feature should give people confidence that there aren’t pics of them floating around Facebook that they could see but just don’t know about. It could also help thwart impersonation. But Facebook tells me it has no plans to use facial recognition to enhance ad targeting or content relevancy sorting, like showing you more News Feed posts from friends who post untagged photos of you or ads related to locations where you appear in untagged photos.

If you’re in someone’s profile photo, which is always public, you’ll always be notified. For other photos, you’ll only get notified if you’re in the audience for that photo, so as to protect the uploader’s privacy and not alert you to photos you’re not allowed to see. A Photo Review section of the profile will keep track of all your untagged but recognized photos.

Facebook’s applied machine learning product manager Nipun Mather tells me the feature is designed to give people more control, make them feel safer, and provide opportunities for nostalgia.

Facebook is also adding a new overarching photo and video facial recognition opt-out privacy setting that will delete its face template of you and deactivate the new Photo Review feature, as well as the old Tag Suggestions feature that used facial recognition to speed up tagging when friends posted a photo of you. These will all roll out everywhere over the next few weeks, except in Europe and Canada, where privacy laws prohibit Facebook’s facial recognition tech.

Facebook is also using the feature to assist the vision impaired. Now Facebook’s machine vision-powered feature that describes what’s in a photo will also read aloud the names of untagged friends.

“Over time our goal is to make these features available everywhere . . . but right now we’re focusing on markets where tag suggestions are available” says Facebook’s Deputy Chief Privacy Officer Rob Sherman.

While Tag Suggestions might be seen as weakening privacy, Photo Review could be perceived as enhancing it and might get a pass from regulators. Whether it’s an unauthorized photo of you that you want taken off Facebook, an embarrassing pic you don’t want tagged but want to monitor comments on, or someone trying to pretend to be you, Photo Review gives people more visibility into how their likeness is used.

Published at Tue, 19 Dec 2017 15:56:03 +0000


Not too proud to beg? Facebook downgrades posts that ask for likes

Posts that solicit likes and comments by flat out asking for them will soon be less prominent on Facebook. On December 18, the social media platform announced that posts that use “engagement bait” will appear less often in the news feed, thanks to a new machine learning algorithm. The change means that posts that say “like if you agree” or “tag someone who looks like this” will be penalized in Facebook’s algorithms that determine which posts show up in the news feed.

The change begins with downgrading single posts that use engagement bait, a change that starts rolling out this week. But that’s just the start — Facebook says Pages that use these tactics on an ongoing basis will see an even further drop, though the platform is giving these users a few weeks to adapt before rolling out that change.

Engagement bait comes in a number of different forms, but it all boils down to asking for a specific interaction. Facebook’s current algorithms promote posts that have more interactions, which means posts using the tactic often show up in more news feeds. Facebook will soon start downplaying a number of different types of engagement bait, including vote baiting, or asking for a specific reaction in a survey-like post. Posts that ask for likes, shares, tags, and comments will all be placed further down in the news feed by the new algorithms.

Facebook is aiming to eliminate those spam-like posts, so not all posts that ask for reactions will be affected. Posts that ask for advice, recommendations, or help will not be downgraded, Facebook says. That means that a missing child report asking for shares won’t be downgraded in the new algorithms, for example. The company is focusing on engagement bait posts that are not authentic, one of the company’s core values for the news feed.

For users who find posts randomly asking for likes, tags, and comments annoying, the change will help clean up the news feed, eliminating some of the posts that have a large number of interactions not because the post is helpful or humorous but simply because they asked for the interactions.

For some Pages and other publishers, the change means yet another algorithm that affects the reach of their posts. Because users are more likely to interact with a post when simply asked, some social media experts recommended the tactic in the past. Now, Pages will have to work to compose posts that gain interactions authentically, not through solicitation.

The news feed update joins a handful of other recent changes Facebook has launched, including the elimination of the ticker feed and the testing of a tool that allows users to post to the news feed but not to a profile. Earlier this year, Facebook rolled out similar algorithms targeting clickbait links, penalizing posts that use those tactics, and also rolled out a filter downgrading clickbait phrases.

Published at Mon, 18 Dec 2017 15:58:03 +0000


Facebook is clamping down on posts that shamelessly beg for your engagement

A lot of crap gets shared on Facebook, but soon the volume may be a little lower, after Facebook made a move to penalize content that shamelessly begs people for engagement.

The social network giant said today that it will penalize Page owners and people who resort to “engagement bait,” which means posts that encourage users to like, comment or tag people in the comments section in order to gain wider visibility of their content.

The incentives — such as “Share with friends to win a free trip” or “Like if you’re an Aries” — get content shared through engagement, ultimately helping the post, and the Page owner or author, grow their reach as users interact and it shows up in their friends’ News Feeds.

Not so now. A new tweak to the News Feed algorithm will mean “stricter demotions” for Pages and individuals who adopt engagement bait tactics. Starting in a couple of weeks, offenders will have the total reach of all of their posts reduced if their content begs or baits users to interact. As you’d expect, serial offenders will be hit hardest.

But, Facebook is extending an olive branch and — initially, at least — engagement baiters can earn their original reach back with good behavior, i.e. less of the sludge and ‘better’ content all round.

Three examples of “engagement baiting” shared by Facebook

Facebook did specify that there are some exceptions to this clampdown, and that includes examples like a missing child report, raising money for a cause, or asking for travel tips, to quote the company directly.

The crackdown itself is led by a machine learning model that the social network said has been fed “hundreds of thousands of posts” to detect different kinds of engagement bait.

This push to close down some of the spammier types of content follows a clampdown on sites with crappy web experiences — for example those caked in advertising — and moves to weed out clickbait in multiple languages.

Facebook is, of course, still answering tougher questions about the overall impact its service is having on society across the world. In addition to explaining how Russian actors used the site to try to manipulate the U.S. general election and the UK’s Brexit vote, it is also being criticized by former executives who accuse it of “destroying how society works.”

Published at Mon, 18 Dec 2017 11:03:25 +0000


Facebook made a game with Porgs and that's really all you need to know

Image: facebook

If there’s one thing we should all be able to agree on about The Last Jedi, it’s most definitely the Porgs.

Now, Facebook is helping you relive all the best Porg moments from The Last Jedi and then some, thanks to the social network’s new Porg Invasion game.

Playable on Facebook’s app and website, as well as Messenger, the adorable — and spoiler-free — game puts you on board the Millennium Falcon as it’s quickly being overrun by Porgs. Switch between BB-8, who must frantically snatch each critter out of the air, and Chewie, who has to fix the destruction the Porgs leave behind.

Image: Disney

If you can keep the Porgs at bay and fix the damage before the time is up, then you can advance to the next level. 

Officially, the game falls under Facebook’s Instant Games platform, so don’t expect anything too complex, even though Porg Invasion is admittedly much more fun than most of Facebook’s lightweight gaming offerings.

Of course, the real reason you’ll want to play, though, is because if there’s one thing The Last Jedi taught us, it’s that there can never be too many Porgs.

Published at Fri, 15 Dec 2017 22:27:38 +0000


EyeEm’s new products aim to understand brand aesthetics

EyeEm is unveiling new tools to help the brands and marketers using the site to source their images.

Underlying these tools is a technology called EyeEm Vision, which we described in depth earlier this year. The goal is to expand image recognition so that it’s not just identifying the objects in a photo, but also the photo’s aesthetic qualities.

EyeEm’s co-founder and chief product officer Lorenz Aschoff described EyeEm Vision as an extension of the photography marketplace’s broader mission to address “the content crisis” — namely the fact that when EyeEm was founded in 2011, Aschoff felt that there was a “massive flood of images” that had “completely destroyed the visual aesthetics of the web.”

EyeEm aims to fix that by helping brands find beautiful photographs. And Aschoff said EyeEm Vision has been trained to identify many of the visual elements that make for a good photograph — it is, in his words, “technology that understands, in general, beauty.”

At the same time, he acknowledged, “What I think is beautiful might be different from what you think is beautiful.” Plus, individual brands are going to have their own specific standards and guidelines that go beyond beauty. So each customer can upload photos that train EyeEm Vision to identify photos that match their own aesthetic — Aschoff said EyeEm’s analysis is looking at around half a million different factors.

EyeEm personalized search

One of the ways EyeEm is actually deploying the technology is by launching a new Missions Dashboard. Brands use Missions to crowdsource campaign photos from the EyeEm community, and the new dashboard allows them to track how their Mission is going — how many photographers are participating, how many photos have been uploaded, and so on. EyeEm says that the average Mission results in more than 100,000 photos, which is why it’s important to use EyeEm Vision to surface the photos that best match the brand’s style.

EyeEm is also incorporating Vision into a personalized search product, where marketers can search the EyeEm image library, filtered based on their own brand guidelines. For example, BCG’s 11,000 consultants can now search for images to use in their presentations and marketing materials, and EyeEm will only show the images that are a good fit with the BCG brand.

And while this is less directly related to Vision, EyeEm is also announcing a new program called Custom, where brands can work with EyeEm photographers on custom shoots.

Lastly, if you’re curious about Vision, you can try it out for yourself on the EyeEm website.

Featured Image: Eunice Eunny/EyeEm

Published at Fri, 15 Dec 2017 18:19:04 +0000


Facebook on how it affects your mental health: It's you, not them

A Facebook employee working in the company’s Community Operations Team in Essen, Germany.
Image: AP/REX/Shutterstock

Facebook is a symbol of one of the great debates of the 21st century: Is social media a gift to humanity, or is it a curse that drives us further apart and deeper into our own ideological echo chambers? 

There is no simple answer to that question, which is why it frequently becomes a cultural obsession, as it did this week when a recent video surfaced of a former Facebook executive decrying the negative effects of social media.

Now Facebook is joining the conversation with a lengthy blog post about its efforts to understand how the social media platform affects users’ well-being. The bottom line is that whether or not social media makes us miserable seems to depend on how we use it, say Facebook’s David Ginsberg, director of research, and Moira Burke, a research scientist. 

“According to the research, it really comes down to how you use the technology,” write Ginsberg and Burke. “For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends — messaging and commenting on each other’s posts.”

Passively consuming social media has been linked to negative effects, whereas active engagement may be capable of boosting well-being, say Ginsberg and Burke. (It’s worth noting that more engaged users are likely more valuable to Facebook’s advertising business.)

That draws a fascinating line between Facebook and critics who argue that social media can have a poisonous effect on people’s self-esteem, their relationships, and their ability to consume and reflect on the news. Facebook’s position seems to be that those unpleasant experiences aren’t caused directly by its product, but by how people engage with the platform. 

That’s a much more optimistic view of social media than what Chamath Palihapitiya, the company’s former vice president for user growth, shared with an audience at Stanford Graduate School of Business last month. 

“The short-term, dopamine-driven feedback loops we’ve created are destroying how society works,” Palihapitiya said, describing the habit-forming nature of online interactions (think of the rush of receiving comments, likes, and hearts on your social media posts).

After these comments became widely publicized this week, he used a Facebook post to clarify that he believes the company is “a force for good in the world.” 

“Facebook has made tremendous strides in coming to terms with its unforeseen influence and, more so than any of its peers, the team there has taken real steps to course correct,” he wrote. 

But Palihapitiya is not the only one alarmed by the way social media influences our behavior. Last month, Sean Parker, Facebook’s founding president, said the company is “exploiting a vulnerability in human psychology” that primes humans to crave validation. 

Ginsberg and Burke don’t name Palihapitiya or Parker. They do reference scientific studies that are both flattering and unfavorable to Facebook. The company’s own research, in partnership with a Carnegie Mellon University psychologist, found that users who sent or received more messages, comments, and posts to their profiles said their feelings of depression and loneliness improved. But in another experiment, students randomly assigned to read Facebook for 10 minutes were in a worse mood by the end of the day than those who posted or talked to friends on Facebook. Other research suggests screen time, including social media, takes a toll on teens’ health.

Negative effects, say Ginsberg and Burke, might be related to the uncomfortable experience of reading about others and comparing yourself negatively to them. Time spent on social media and on the internet might also reduce in-person socialization, which can lead to feelings of isolation. 

Though Facebook has previously commented on its own well-being research, the blog post offers a candid discussion of the negative aspects of social media, along with details about the company’s efforts to understand those dynamics. 

The post doesn’t contain unexpected revelations, but it does include insights about how Facebook views its controversial role in mediating hundreds, if not thousands, of small moments in a person’s everyday life.

Ginsberg and Burke write that the company has already made significant changes to News Feed by demoting clickbait and false news, optimizing ranking so posts from close friends are more likely to show up first, and promoting posts that are “personally informative.” The blog post also announces the launch of Snooze, a feature people can use to tune out a friend’s posts for 30 days without having to permanently unfollow or unfriend them. 

Ginsberg and Burke add that Facebook will continue to research well-being and make new efforts to understand “digital distraction.” It will also put on a summit next year with academics and industry leaders to “tackle” these complex issues.

While the public might wait for Facebook, and the broader tech and research communities, to solve this riddle, Ginsberg and Burke touch on a sensitive subject: personal responsibility. Their focus on how the effects of social media change depending on a user’s style of engagement — mindless scrolling versus active participation — hints at the possibility that users may need to be more aware of (and adapt) their behavior if they want to feel better.

That might be hard, though, for users who count on being able to choose a thumbs-up or heart and move on with their lives. 

Published at Fri, 15 Dec 2017 15:30:00 +0000
