The Near Field Communication (NFC) Forum has defended its short-range radio standard, blaming the security vulnerabilities revealed at the Black Hat conference last week on flaws in the apps that use the tech.
Charlie Miller, best known for his work in exposing security weaknesses on Apple smartphones and desktops, demonstrated weaknesses in NFC implementations including Android Beam – which allows simple peer-to-peer data exchange between two Android-powered devices using the radio-tag tech – and Nokia’s NFC content-sharing and pairing tech. To do so, Miller tested Nokia’s N9 handset, an NFC handset which runs on the MeeGo system, and the Samsung Nexus S and Google Galaxy Nexus – both of which use Android Beam.
The security researcher began his work by scanning the drivers, hardware and program stack on both Nokia’s MeeGo and Google’s Android for problems, using fuzzing, a software-testing technique that injects random data to flush out bugs. He found some minor shortcomings with this approach, discovering a vulnerability in Android affecting all “Gingerbread” devices and “Ice Cream Sandwich” smartphones running flavours of Android prior to version 4.0.1.
But he was far more successful finding bugs at the application layer, involving the many applications that interface with NFC technology.
For example, an Android phone running the Android Beam app can simply touch another NFC-enabled Android device in order to get it to load a webpage controlled by the toucher. This means the technology can be used to mount attacks involving content loaded into a browser, not just the relatively secure NFC driver and kernel stack, greatly increasing the potential for mischief.
The Nokia Content Sharing app running on the Nokia N9 with MeeGo offers a route into the same type of attack. As with Android Beam, Nokia’s Content Sharing app allows a user to force another person’s smartphone to load a web page without any user interaction. Disturbingly, this works irrespective of whether or not the “Confirm Sharing and Connecting” setting is enabled.
The Nokia smartphone is configured to automatically pair with Bluetooth devices when its NFC tag-tapping functionality is switched on. In cases where Bluetooth is disabled, the phone will actually turn Bluetooth on and pair with devices without asking for permission, unless Confirm Sharing and Connecting is enabled.
Miller pointed out, for example, that the OS-level handler for .png graphics files on the Nokia N9 contains known vulnerabilities, so a potential hacker would only need to force a targeted Nokia user to load a webpage containing PNG exploits in order to hijack his or her smartphone.
In one demo, Miller was able to view files on a targeted Android handset. Hacking the Nokia handset allowed Miller to send texts or make calls on the compromised device.
He concluded that NFC-enabled devices should offer an option to seek user confirmation before allowing data received over an NFC channel to be processed by an application, and that this confirmation should be requested by default. NFC exploits are particularly nasty because, as things stand, certain smartphones can be made to download and execute a malicious payload without the user even knowing any interaction has occurred.
Miller’s presentation, Don’t stand so close to me: An analysis of the NFC attack surface, was one of the highlights of this year’s Black Hat USA conference.
The NFC Forum praised Miller’s work, and acknowledged the possibility of app bugs and implementation flaws, while stressing the overall robustness of NFC technology.
“Miller’s demonstration underscores the importance of providing appropriate security measures at the application layer and enabling users to adjust security settings to suit their own needs and preferences,” the NFC Forum said in a statement published by NFC World. “The NFC Forum works to ensure that tools are available to allow applications to operate with the appropriate level of security.”
Debbie Arnold, director of the NFC Forum, elaborated in comments to NFC World. The NFC Forum, she said, works to ensure that tools are available to allow applications to operate with the appropriate level of security. These tools include: (a) Signature RTD (NDEF Signing), a specification the NFC Forum has released to digitally sign messages transmitted between devices and tags; (b) ISO/IEC 13157, a data-link security standard to complement higher-layer security, originally developed by the standardization body Ecma International; (c) application security (end-to-end encryption) defined by the service provider; and (d) additional security layers in service providers’ respective back-end systems.
All of these activities and mechanisms work hand-in-hand. NFC solution providers may add security measures to their applications as they see fit, including both required and optional user actions to enable or disable functions.
Smartphones from Google, Nokia and Samsung already ship with built-in NFC technology, while Apple and Microsoft are both widely expected to add the short-range radio tech later this year. The killer application for the technology is “pay by tap”, which has prompted the launch of many competing mobile wallets, including Google Wallet, Orange’s QuickTap, Visa’s PayWave and MasterCard’s PayPass.
Additional security commentary on Miller’s presentation can be found in a blog post by Sophos.
You may already be using Google Reader, Google’s Web-based RSS reader, but you probably haven’t figured out every advanced trick for getting the most out of this free RSS syndication service.
RSS (aka “RDF Site Summary” or “Really Simple Syndication”), a feed-based communication system that most websites support, makes it easy to stay abreast of your favorite blogs from a single page. Though some third-party programs and even some browsers can help you organize your favorite RSS feeds in one place, Google Reader’s Web-based structure means you can set it up on one computer and then open it anywhere by logging in to your Google account and heading to reader.google.com.
Google Reader is simple to use once you’ve set it up, but your first time with the service can be a bit confusing. We’ve assembled all the tips you need to collect your RSS feeds and have them ready to go in short order.
If you already have a Google account because you use Gmail, Google+, or one of Google’s other Web services, signing up with Google Reader is as easy as signing in to your Google account on the Google Reader homepage.
After signing in, you’ll probably notice that your Google Reader page is a bit sparse. Google Reader is designed to serve as a reader for your RSS feeds, so you’ll need to add some of your favorite sites in order to have content to peruse.
To add new feeds to your Google Reader page, click the Subscribe button in the upper-left area of the page. Doing so opens a small dialog box where you can add a new feed. In many instances, if you’re adding a feed from a relatively large site, you can simply enter the site’s name or URL and Google Reader will return a list of RSS feeds that you might have been looking for. For example, type PCWorld, and Google Reader will list PCWorld feeds such as Top News, Latest Reviews, and Laptop Stories. Click one of the links to see a preview of stories from that feed, to ensure that it’s the feed you’re looking for; then click the Subscribe button under the description to add the feed to your Google Reader.
Google Reader’s features go far beyond merely adding to and sorting your RSS feeds. If you like to browse the Web socially, you’ll appreciate that Google Reader lets you share any post in your reader via email or on the company’s Google+ social network. (If, on the other hand, you want to rein in Google Reader’s sharing tendencies, see “How to share privately with the new Google Reader.”)
You can also star entries that you find especially interesting. And if you have friends who also use Google Reader, you can see anything they’ve chosen to mark as a starred post.
If you load your Google Reader with feeds, manually scrolling through hundreds of posts every day can become a chore. Instead, try using the J and K keys (or if you prefer, the N and P keys) on your keyboard to move up or down your reader by one post.
Occasionally, searching for a site’s name in the Subscribe box won’t bring up the RSS feed you seek. In these cases you’ll need to capture the site’s RSS feed URL manually and then put it into your Google Reader. The relevant URL usually lurks behind an RSS button or behind a link labeled ‘RSS’. Once you’ve copied the URL, you can paste it into the Google Reader Subscribe box and add the feed to your collection.
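Under the hood, the URL you paste in simply points at an XML document. As a rough illustration (using Python’s standard library and a made-up two-item feed, not any real site’s), here is the kind of structure a reader parses once you hand it a feed URL:

```python
import xml.etree.ElementTree as ET

# A tiny, made-up RSS 2.0 document standing in for what a real feed URL returns.
RSS_XML = """<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def list_entries(rss_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_entries(RSS_XML):
    print(title, "->", link)
```

Google Reader does all of this for you, of course; the point is only that a feed is plain, predictable XML, which is why pasting its raw URL into the Subscribe box works.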
Manually adding an RSS feed’s URL is an unavoidable annoyance when you’re following sites that don’t support automatic discovery of their RSS feeds. But it can also be a helpful tool in certain situations. Both Craigslist and eBay have handy RSS feed features that allow you to get real-time updates on new auctions or offers on their sites. For example, I’m currently searching for a new apartment; so instead of repeatedly running a search for apartments in my chosen neighborhoods and price range, I grabbed the handy Craigslist RSS feed for my preferred search parameters and subscribed to it through Google Reader. Now I see new apartment listings as soon as they pop up on Craigslist, simply by checking for news on Google Reader.
Similarly, if you want to grab a specific auction item on eBay, you can get an RSS feed for any search that you’d like to make on the site—as well as for specific eBay shops. Unfortunately, eBay recently rolled out a new search interface that makes locating the RSS feed button more difficult, but you can easily fix that problem by reverting to the old search interface, on which the RSS button appears at the bottom of the page.
These are a few of my favorite creative uses of Google Reader. Once you start looking for RSS feeds, I’m sure that you’ll find ways to use them in your daily browsing. For more tips, check out “Getting started with RSS.” If nothing else, Google Reader is a great, free tool for aggregating everything you want to keep track of online in one place.
There really isn’t much to it: you create a campaign, specify how much you are willing to pay each time someone clicks on your ad, and watch the money roll in. If it were that easy, everyone would be a so-called master of Google AdWords. Google AdWords is a skill that takes months to master, but with these simple tricks and tips, your return on investment could increase dramatically.
It may be hard to believe, but Google is not as friendly as you think. When you sign up for a Google AdWords account and create a campaign, there are some simple settings that can be changed to help your ad run smoothly. First, if you are planning to use AdWords for a long period of time, it is essential that you test out different campaigns on different audiences. In this case, we want to make sure that our ads are shown to the public as quickly as possible. Setting your delivery method to Accelerated will help you test your campaign faster and more effectively. The faster you receive results, the less money you waste, and the more time you have to refine your campaign.
Google AdWords also allows you to run your ad on a content network. Unless you specify your demographics and pages, this can cost you a lot of money with very little return on investment. So before checking the content box, be sure that you have a successful campaign in Google Search. This will help you find not only which campaigns are working, but also which keywords are generating your traffic. These Google AdWords tricks and tips will help you save money and, more importantly, increase your return on investment.
For broader tips, make sure you are using the Google keyword tool. Try to find keywords that have high search volume with low search results. This will place your ad at the top of the page without costing an arm and a leg. Also, using KeywordSpy will help you understand which keywords your competitors are using and how much each keyword is worth. Never be too broad with your keywords; keep every campaign in tight groups and be specific. This will target your audience and give you a better chance of a sale.
Now that you know some of the simple Google AdWords tricks and tips, it is your time to start building a campaign. Do not go overboard; keep your daily budget in a range that you can afford. Understand that not every click will produce a sale, and the more you experiment and refine, the more you will master Google AdWords.
Both Google and Oracle said Friday they did not pay any journalists or bloggers for coverage or commentary of their high-profile copyright infringement battle that recently concluded in a California court, but the companies disagreed on what arrangements should be disclosed.
The statements were made in reply to an Aug. 7 order from Judge William Alsup, who oversaw the case. Alsup asked both companies to disclose if they “retained or paid print or internet authors, journalists, commentators or bloggers who have and/or may publish comments on the issues” in the case.
Both companies had until noon Pacific time on Friday to make their submissions, and they appear to have been filed shortly before the deadline.
“Neither Google nor its counsel has paid an author, journalist, commentator or blogger to report or comment on any issues in this case,” Google said in its statement to the court. “And neither Google nor its counsel has been involved in any quid pro quo in exchange for coverage or articles about the issues in this case.”
Google went on to say that the large amount of material written about the case, coupled with the ease with which opinions can be published online, means that some people with an indirect financial connection could have “expressed views regarding this case.” But the company declined to provide a list, saying it would be long and, in Google’s opinion, fall outside the scope and intention of the court’s Aug. 7 order.
It cited examples of universities, political and trade associations, people who use Google’s advertising products on their websites, its own employees, consultants and witnesses.
Oracle identified Florian Mueller, author of the FOSS Patents blog, as “a consultant on competition related matters, especially relating to standards-essential patents,” and noted he disclosed the arrangement in a blog posting on April 18. Oracle also said that some staff members had blogged about the case on its website, but Oracle did not seek to approve any postings in advance.
And then Oracle came out swinging against Google.
“In contrast, Oracle notes that Google maintains a network of direct and indirect ‘influencers’ to advance Google’s intellectual property agenda,” Oracle said in its court statement. “This network is extensive, including attorneys, lobbyists, trade associations, academics, and bloggers, and its focus extends beyond pure intellectual property issues to competition/antitrust issues.”
“Oracle believes that Google brought this extensive network of influencers to help shape public perceptions concerning the positions it was advocating throughout this trial,” Oracle’s statement reads.
Oracle went on to single out two people who it said had written on issues related to the case and had links with Google: Ed Black, president and CEO of the Computer and Communications Industry Association, of which Google is a member, and Jonathan Band, whose book was cited as part of Google’s evidence and who Oracle says has links to Google through trade organizations.
It remains unclear why Alsup issued the original order. Neither party had petitioned the court for such disclosure and it came well after arguments were over in the central part of the case.
Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for the IDG News Service. Follow Martyn on Twitter at @martyn_williams. Martyn’s e-mail address is email@example.com
Mike Olson runs a company that specializes in the world’s hottest software. He’s the CEO of Cloudera, a Silicon Valley startup that deals in Hadoop, an open source software platform based on tech that turned Google into the most dominant force on the web.
Hadoop is expected to fuel an $813 million software market by the year 2016. But even Olson says it’s already old news.
Hadoop sprang from two research papers Google published in late 2003 and 2004. One described the Google File System, a way of storing massive amounts of data across thousands of dirt-cheap computer servers, and the other detailed MapReduce, which pooled the processing power inside all those servers and crunched all that data into something useful. Eight years later, Hadoop is widely used across the web, for data analysis and all sorts of other number-crunching tasks. But Google has moved on.
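The MapReduce idea from that second paper can be sketched in miniature. This toy word count (plain single-machine Python, purely illustrative; real MapReduce distributes these phases across thousands of servers) mirrors the map, shuffle and reduce steps the paper describes:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle + reduce: group the pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick fox", "the lazy dog", "the fox"]
print(reduce_phase(map_phase(docs)))
# {'the': 3, 'quick': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

The power of the real system comes not from the logic, which is trivial, but from running the map and reduce functions in parallel across a cluster while hiding machine failures from the programmer.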
In 2009, the web giant started replacing GFS and MapReduce with new technologies, and Mike Olson will tell you that these technologies are where the world is going. “If you want to know what the large-scale, high-performance data processing infrastructure of the future looks like, my advice would be to read the Google research papers that are coming out right now,” Olson said during a recent panel discussion alongside Wired.
Since the rise of Hadoop, Google has published three particularly interesting papers on the infrastructure that underpins its massive web operation. One details Caffeine, the software platform that builds the index for Google’s web search engine. Another shows off Pregel, a “graph database” designed to map the relationships between vast amounts of online information. But the most intriguing paper is the one that describes a tool called Dremel.
“If you had told me beforehand what Dremel claims to do, I wouldn’t have believed you could build it,” says Armando Fox, a professor of computer science at the University of California, Berkeley, who specializes in these sorts of data-center-sized software platforms.
Dremel is a way of analyzing information. Running across thousands of servers, it lets you “query” large amounts of data, such as a collection of web documents or a library of digital books or even the data describing millions of spam messages. This is akin to analyzing a traditional database using SQL, the Structured Query Language that has been widely used across the software world for decades. If you have a collection of digital books, for instance, you could run an ad hoc query that gives you a list of all the authors — or a list of all the authors who cover a particular subject.
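Google hasn’t published Dremel’s exact query syntax beyond the examples in its paper, but the kind of ad hoc book query described above, written here in ordinary SQL against a toy in-memory SQLite table, would look something like this:

```python
import sqlite3

# A toy "library of digital books" table; the rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (author TEXT, subject TEXT)")
conn.executemany("INSERT INTO books VALUES (?, ?)", [
    ("Knuth", "algorithms"),
    ("Cormen", "algorithms"),
    ("Austen", "fiction"),
])

# Ad hoc query: all distinct authors who cover a particular subject.
rows = conn.execute(
    "SELECT DISTINCT author FROM books WHERE subject = ? ORDER BY author",
    ("algorithms",),
).fetchall()
print([r[0] for r in rows])  # ['Cormen', 'Knuth']
```

The query itself is unremarkable; Dremel’s claim is that it can answer this sort of question over trillions of rows spread across thousands of machines, rather than a three-row table on one.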
“You have a SQL-like language that makes it very easy to formulate ad hoc queries or recurring queries — and you don’t have to do any programming. You just type the query into a command line,” says Urs Hölzle, the man who oversees the Google infrastructure.
The difference is that Dremel can handle web-sized amounts of data at blazing fast speed. According to Google’s paper, you can run queries on multiple petabytes — millions of gigabytes — in a matter of seconds.
Hadoop already provides tools for running SQL-like queries on large datasets. Sister projects such as Pig and Hive were built for this very reason. But with Hadoop, there’s lag time. It’s a “batch processing” platform. You give it a task. It takes a few minutes to run the task, or a few hours. And then you get the result. But Dremel was specifically designed for instant queries.
“Dremel can execute many queries over such data that would ordinarily require a sequence of MapReduce jobs, but at a fraction of the execution time,” reads Google’s Dremel paper. Hölzle says it can run a query on a petabyte of data in about three seconds.
According to Armando Fox, this is unprecedented. Hadoop is the centerpiece of the “Big Data” movement, a widespread effort to build tools that can analyze extremely large amounts of information. But with today’s Big Data tools, there’s often a drawback: you can’t quite analyze the data with the speed and precision you expect from traditional data analysis or “business intelligence” tools. But with Dremel, Fox says, you can.
“They managed to combine large-scale analytics with the ability to really drill down into the data, and they’ve done it in a way that I wouldn’t have thought was possible,” he says. “The size of the data and the speed with which you can comfortably explore the data is really impressive. People have done Big Data systems before, but before Dremel, no one had really done a system that was that big and that fast.
“Usually, you have to do one or the other. The more you do one, the more you have to give up on the other. But with Dremel, they did both.”
According to Google’s paper, the platform has been used inside Google since 2006, with “thousands” of Googlers using it to analyze everything from the software crash reports for various Google services to the behavior of disks inside the company’s data centers. Sometimes the tool is used with tens of servers, sometimes with thousands.
Despite Hadoop’s undoubted success, Cloudera’s Mike Olson says that the companies and developers who built the platform were rather slow off the blocks. And we’re seeing the same thing with Dremel. Google published the Dremel paper in 2010, but we’re still a long way from seeing the platform mimicked by developers outside the company. A team of Israeli engineers is building a clone they call OpenDremel, though one of these developers, David Gruzman, tells us that coding is only just beginning again after a long hiatus.
Mike Miller — an affiliate professor of particle physics at the University of Washington and the chief scientist of Cloudant, a company that’s tackling many of the same data problems Google has faced over the years — is amazed we haven’t seen some big-name venture capitalist fund a startup dedicated to reverse-engineering Dremel.
That said, you can use Dremel today — even if you’re not a Google engineer. Google now offers a Dremel web service it calls BigQuery. You can use the platform via an online API, or application programming interface. Basically, you upload your data to Google, and it lets you run queries on its internal infrastructure.
This is part of a growing number of cloud services offered by the company. First, it let you build, run, and host entire applications atop its infrastructure using a service called Google App Engine; now it offers various other utilities that run atop this same infrastructure, including BigQuery and the Google Compute Engine, which serves up instant access to virtual servers.
The rest of the world may lag behind Google. But Google is bringing itself to the rest of the world.
Hey, remember Symbian? You know, that god-awful operating system Nokia used in its smartphones? Well, just two years ago, it was still the top-selling mobile OS in the world.
Things change fast in the smartphone business. Yesterday, Gartner published its latest estimates of global mobile sales, and it reminded me of two things. First, it’s really foolish to take anything for granted about an industry that reshuffles itself so quickly. Back in 2010, when “California Gurls” was tearing up the charts (God forgive us), the BlackBerry was still going toe-to-toe with the iPhone. Second, when you look at sales volumes, the big story in mobile for the past few years hasn’t really been Apple; it’s been the rise of Google’s Android.
In the second quarter of 2009, worldwide smartphone sales totaled about 41 million units, with Apple and Android devices accounting for about 5.3 million and 755,000 units, respectively. This last quarter, consumers bought 153 million smartphones.* About 28.9 million had an Apple logo, while 98.5 million ran Google’s OS.
When we watch companies like Apple and Samsung throw down in vicious patent fights to keep each other’s products off of shelves, this is the chart we should keep in mind. Apple looks unbeatable today, but nobody can be certain it’ll be able to maintain the fat profit margins on each device that make it such a powerful company. And who knows what will happen when that Amazon phone comes out. Smartphones are still a relatively young industry, and things change fast.
*An earlier version of this story incorrectly stated the figure as 140 million units.
It’s interesting how, over the past few weeks, the rumor mill has talked about the iPad Mini almost as much as about the iPhone 5. Some alleged iPad Mini cases have leaked on the web, courtesy of an iDevice accessories maker.
Rumor has it that Apple is readying a smaller, cheaper tablet, unofficially dubbed the iPad Mini, sporting a 7.85-inch screen, that would rival Google’s Nexus 7 and Amazon’s Kindle Fire.
While my instinct tells me that there will be no iPad Mini after all, the whole web seems fascinated by the prospect of this product, especially by the idea of an Apple tablet that costs $250 or $300.
What you are seeing in the gallery below are a couple of iPad Mini cases coming from Devicewear, a company Apple will probably never forget if the pictures are real. You can see for yourself the smaller dock connector on the bottom of the device and the way the speaker grilles are designed, reminiscent of the iPhone 5, or what we call the iPhone 5 based on the recent leaks.
We’ve also spotted the volume buttons, the mic, a button that probably turns off screen rotation, and a camera without an LED flash. The lens of the camera mounted on the back of the alleged iPad Mini is pretty big, so it might use the same technology as the iPhone 4S, even though the lack of an LED flash would be a drawback.
Even so, these are only rumors, and without Apple’s confirmation we can’t be 100% sure that this is what the iPad Mini looks like, or that it even exists.
The mathematical insight that turned Google Inc. into a multibillion-dollar company has the potential to help the world avert the next financial crisis. If only banks made public the data required to do the job.
Sixteen years ago, the founders of Google — computer scientists Larry Page and Sergey Brin — introduced an algorithm to measure the “importance” of Web pages relative to any set of keywords.
Known as PageRank, it works on the notion that Web pages effectively vote for other pages by linking to them. The most important ones, Page and Brin reasoned, should be those drawing links from many other pages, especially from other really important ones.
If this definition sounds circular, it is. It also captures an authentic reality, which is why respecting it gives far superior results. Page and Brin’s breakthrough involved using mathematics to make it work. The required ideas don’t go much beyond high-school algebra, although it takes lots of computing power to apply them to something as sprawling as the World Wide Web.
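To make the circular definition concrete, here is a bare-bones power-iteration sketch of PageRank on a four-page toy web (illustrative only; the 0.85 damping factor is the value commonly quoted from Page and Brin’s work):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and passes the rest of its current score along its outgoing links.
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

# Toy web: C draws links from A, B and D, so it should score highest.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(toy_web)
print(max(ranks, key=ranks.get))  # C
```

Repeating the redistribution step until the scores stop changing is exactly the "circular" self-consistency the definition demands, resolved by iteration rather than by decree.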
What could this have to do with finance? Quite a lot. The systemic risk that turned the U.S. subprime-lending crisis into a global disaster is circular, too. We can’t identify it simply by looking for the banks with the most assets or the biggest portfolios of risky loans. What matters is how many links a bank has to other institutions, how strong those links are and how risky those other banks are, not least because they too have links to other risky banks.
Something like PageRank might be just the right tool to cut through it. That’s the argument, at least, made by a team of European physicists and economists in a new study. Their algorithm, DebtRank, seeks to measure the total economic value that would be destroyed if a bank became distressed or went into default. It does so by moving outward from the bank through the web of links in the financial system to estimate all the various consequences likely to accrue from one failure. Banks connected to more banks with high DebtRank scores would, naturally, have higher DebtRank scores themselves. (I have put a little of the technical detail on my blog.)
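A rough sketch of the recipe (toy numbers and a simplified rendering of the published algorithm, whose distinctive rule is that each bank passes its distress on only once, so shocks can’t cycle around forever) shows how distress spreads along weighted links:

```python
def debtrank(exposures, values, start):
    """Toy DebtRank-style calculation.

    exposures[i][j] = fraction of bank j's value wiped out if bank i fails
    completely; values = economic value of each bank; start = the bank
    assumed to default. All numbers here are invented for illustration."""
    distress = {bank: 0.0 for bank in values}
    distress[start] = 1.0
    frontier = [start]          # banks that still have distress to pass on
    propagated = set()
    while frontier:
        next_frontier = set()
        for bank in frontier:
            propagated.add(bank)       # each bank propagates only once
            for other, weight in exposures.get(bank, {}).items():
                distress[other] = min(1.0, distress[other] + weight * distress[bank])
                if other not in propagated:
                    next_frontier.add(other)
        frontier = [b for b in next_frontier if b not in propagated]
    total = sum(values.values())
    # Fraction of system value destroyed beyond the bank that failed first.
    return sum(distress[b] * values[b] for b in values if b != start) / total

toy_exposures = {"A": {"B": 0.5, "C": 0.25}, "B": {"C": 0.5}}
toy_values = {"A": 2.0, "B": 1.0, "C": 1.0}
print(debtrank(toy_exposures, toy_values, "A"))  # 0.25
```

In this toy network, bank A’s failure half-distresses B directly and C both directly and through B, destroying a quarter of the remaining system’s value, which is precisely the sort of second- and third-order damage that asset size alone never reveals.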
As a demonstration, the researchers calculated DebtRank on the basis of the known network of equity investments linking institutions — pretty much the best they can do with publicly available data. If Bank A owns stock in Bank B, the two are linked. This network, of course, reflects only a subset of the many links created by derivatives and other instruments, so the calculation is a little like working out the best driving route from New York to Los Angeles while ignoring two-thirds of all the roads. Nevertheless, it’s useful for demonstrating what might be possible with more complete data.
The analysis offers some surprises. At the peak of the financial crisis, in November 2008, for example, DebtRank scores for the largest 20 or so banks show that simple bank size isn’t as important as we have come to think. Institutions such as Barclays Plc, Bank of America Corp., JPMorgan Chase & Co. and Royal Bank of Scotland Group Plc presented more systemic risk than did Citigroup Inc. or Deutsche Bank AG, despite being significantly smaller in total assets. Wells Fargo & Co. stands out even more: It presented as much systemic risk as Citigroup, despite having only a quarter of the assets.
An algorithm alone can’t save the world, and this isn’t the final word on the best way to measure systemic risk. Yet the apparent superiority of the DebtRank approach underscores how our ability to monitor the financial system depends wholly on the availability of data. Currently, most of the information that would be needed to calculate DebtRank or any other similar measure is simply not public.
Imagine a world in which banks and other financial institutions were legally required to disclose absolutely all of their assets and liabilities to central banks, which would in turn make that information public on a website. Regulators — indeed, anyone — would then be able to see the whole network and assess a bank’s situation in full clarity. Anyone so inclined could calculate measures such as DebtRank and assess how much any particular bank is contributing to potential financial instability.
With full transparency, it’s just possible that the core business of lenders would go back to assessing the creditworthiness of borrowers. They would need to do so to maintain a good reputation and to borrow themselves, as any risky loans they made would be known to all. In such a situation, the economist and physicist Stefan Thurner of the Medical University of Vienna suggests, “financial institutions would only survive and prosper if they assess the risk of others better than their peers.”
That is a radical idea, so radical it is almost certainly a political nonstarter. But as the British physicist William Thomson, also known as Lord Kelvin, put it back in the 19th century: “What you cannot measure, you cannot hope to improve.” It’s a lasting piece of wisdom.
Mark Buchanan, a theoretical physicist and the author of “The Social Atom: Why the Rich Get Richer, Cheaters Get Caught and Your Neighbor Usually Looks Like You,” is a Bloomberg View columnist. The opinions expressed are his own.
You can’t just write an article or blog post and leave it; you’ve got to get it some attention. Prepare your content in a certain way, and you’ll have a better chance of ranking on Google and other search engines.
Google is trying to provide value to its customers, and so should you. Google is looking for value and authority. You may think of its algorithms as just a mathematical way to rank sites, but they also include measurements designed to identify what might be good content, and whose site might have an expert author.
You can pay big bucks for an expert to set up onpage SEO to optimize your website and your posts, or you can follow these simple steps and get on page one yourself.
Onpage SEO – Become a Part-Time Geek
Here are 7 tips to help you rank on Google, or any other search engine.
- Always have an image – find something that is eye catching and different, yet relates to your post.
- Keywords or keyword phrases must be in each title, or at a minimum in one H1, one H2, and one H3 tag.
- What the heck are ‘H’ tags? They are HTML tags that identify the size and characteristics of the title and subtitles within your post. For example, in WordPress’s visual editor you don’t need to input actual code; you will see a drop-down menu to choose Heading 1 (H1 tag), Heading 2 (H2 tag), and Heading 3 (H3 tag). This is oversimplified, but likely all you need to know. Google looks for them.
- Keywords must be in the first and last sentence – your first sentence could be your H1 tag/title, as I’ve done in this post – that’s what Google sees first.
- Keywords must be in the alt tag of the image. When you upload your photo to WordPress, you will see an area to type in your keywords or keyword phrases.
- Keyword density in your entire post should be about 1% – for example, in a 500-word blog post, you need your keyword phrase mentioned 5 times. You can include the times it is mentioned in the titles. Warning: don’t ‘stuff’ your posts with keywords. It won’t read well, it will look spammy, and Google recognizes this tactic and won’t consider ranking that type of post.
- Link to another page on your website or blog – for example, link to an older story with a similar topic. If you are just starting out, build your blog with this in mind. You can always go back to a live post and add a sentence with a link to a post or category on your site.
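The 1% guideline from the list above is simple arithmetic. A quick sketch for checking a draft (the filler text and keyword phrase are made up for the example):

```python
def keyword_density(text, phrase):
    """Occurrences of the phrase divided by the total word count."""
    words = text.lower().split()
    return text.lower().count(phrase.lower()) / len(words)

# A 510-word stand-in draft: 500 filler words plus 5 two-word keyword mentions.
draft = " ".join(["word"] * 500 + ["onpage", "seo"] * 5)
print(f"{keyword_density(draft, 'onpage seo'):.1%}")  # about 1.0%
```

Anything that climbs far above that figure is a sign you’re drifting into the keyword stuffing the warning above cautions against.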
Do Not Skimp on Onpage SEO
These are the simple onpage SEO tactics you need to do to get on page 1 of Google, plain and simple.
- Do not take shortcuts
- Put the work in that’s required
- Don’t be lazy or you won’t get on page 1
This is how you can work with Google to help them find you, and your content, and give it a good ranking. Trust me, Page 1 is worth the effort.
About a year ago I wrote about the Chromebook laptop computers that run Google’s cloud-based Chrome operating system instead of Windows or Macintosh or Linux. These low-priced laptop computers have tiny hard drives, as they store almost all data and applications in the cloud. As long as the user has an Internet connection, these computers can perform nearly all the same functions as their more expensive cousins.

Chromebook computers never get viruses and are very, very simple to use, even for computer novices. In fact, they have been called “laptops for the AARP generation” because of their simplicity of use. They are great for use by anyone who is nearly computer illiterate. Chromebook computers are popular with senior citizens, grammar-school children, and anyone else who has never learned the intricacies of computers.