http://www.blackhatworld.com/blackhat-seo/black-hat-seo/544672-tut-find-endless-high-google-page-ranking-domains-scrapebox.html

While there are literally a gazillion "how to" Scrapebox threads out there, I haven't seen one quite like this, so I thought it might be worth sharing. Basically, what I'll show you is how to find high PageRank domains with a simple Scrapebox search.

You can then take these domains and run them against your usual footprints to see if any of them are open for posting/backlinking.

Basically, all you do is take any domain with PageRank (preferably 5 or higher) and make use of a powerful but not often discussed Bing search operator called "linkfromdomain". What the "linkfromdomain" operator tells you is which domains the domain you are checking links to.

So if you went to Bing and typed in "linkfromdomain:apple.com", you would get back a list of sites that Apple.com links to! How does this help you find high PR domains? It's quite simple: the theory holds that high PageRank sites typically link to other high PageRank/authority sites, so pulling the list of sites a high PR domain links to typically leads you to a lot of other high PR websites.
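Just to illustrate what "linkfromdomain" spits back, here's a rough Python sketch that pulls one page of Bing results for the query and prints the unique domains it finds. This is purely for illustration: it assumes Bing's current result markup ("li.b_algo"), which can change at any time, and Scrapebox obviously does all of this for you at scale with proxies.

Code:
# Rough sketch: fetch one page of Bing results for a linkfromdomain: query
# and print the unique domains found in the organic results.
# Assumes Bing's "li.b_algo" result markup, which can change at any time.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse, quote

def linked_domains(domain):
    url = "https://www.bing.com/search?q=" + quote("linkfromdomain:" + domain)
    headers = {"User-Agent": "Mozilla/5.0"}  # Bing tends to block the default client UA
    html = requests.get(url, headers=headers, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = set()
    for link in soup.select("li.b_algo h2 a"):
        netloc = urlparse(link.get("href", "")).netloc.lower()
        if netloc.startswith("www."):
            netloc = netloc[4:]
        if netloc:
            found.add(netloc)
    return sorted(found)

print(linked_domains("apple.com"))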

If you start with a good seed list of domains and run them through Scrapebox, you'll end up with a nice big list of high PR domains.

Instead of more talk, here are the steps for using this method:


1) Build a seed list of high PR domains. This will be the most labor-intensive part of the process if you don't already have a list of high PageRank domains. Fortunately, there is a site that lists a bunch of these for you. It will take some copy/pasting to get the domains, but there are a ton of them to use.

Here is the link:
http://www.statscrop.com/websites/pagerank/

They break down the domains by PageRank. I prefer to use domains with a PR of 4-7. If you go higher, you get the uber authoritative/brand domains with very few opportunities to acquire backlinks (there will be some, just few and far between), and if the PR is too low, you're going to get a lot of junk domains to work with because they have already been spammed to death :-)
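If you end up copy/pasting a messy list from that site, here's a quick Python helper, just a convenience sketch (the file name is a placeholder), that strips schemes and paths, drops "www." and duplicates, and leaves you with clean domains to feed Scrapebox.

Code:
# Clean up a copy/pasted seed list: one URL or domain per line in seed_list.txt
# (placeholder name). Strips schemes/paths, lowercases, removes "www." and dupes.
from urllib.parse import urlparse

def clean_seed_list(path):
    domains = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if "://" not in line:
                line = "http://" + line  # urlparse needs a scheme to find the host
            netloc = urlparse(line).netloc.lower()
            if netloc.startswith("www."):
                netloc = netloc[4:]
            if netloc:
                domains.add(netloc)
    return sorted(domains)

for d in clean_seed_list("seed_list.txt"):
    print(d)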

2) Once you have the domains you want to check, just paste them into Scrapebox and add "linkfromdomain:" in the custom footprint section. The final tweak I like to make is to select Yahoo instead of Bing to scrape from. Just about every search operator that works with Bing also works with Yahoo, since Yahoo is powered by Bing, and Yahoo seems to have a much higher tolerance for scraping, so this lets you eke out as much scraping per proxy as possible.
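For clarity, this is effectively what the harvester ends up querying: the custom footprint prepended to each domain in your keyword list. A trivial sketch (the seed domains here are just examples):

Code:
# What the harvester effectively queries: the custom footprint "linkfromdomain:"
# prepended to each seed domain in the keyword list. Seeds here are examples only.
seeds = ["apple.com", "nasa.gov", "bbc.co.uk"]
queries = ["linkfromdomain:" + d for d in seeds]
print("\n".join(queries))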

3) Start harvesting!

4) Once harvesting is done, remove duplicate domains, run what you've collected through the PR checker in Scrapebox, and eliminate any domains that don't reach your PR cutoff (where to set that level is up to you).
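Scrapebox has "remove duplicate domains" built in, but if you ever want to do that step outside the tool, something like this works (harvested.txt is just a placeholder for your exported URL list):

Code:
# Collapse a harvested URL list down to unique root domains before PR checking.
# "harvested.txt" is a placeholder for the URL list exported from Scrapebox.
from urllib.parse import urlparse

seen = set()
with open("harvested.txt") as f:
    for line in f:
        netloc = urlparse(line.strip()).netloc.lower()
        if netloc.startswith("www."):
            netloc = netloc[4:]
        if netloc and netloc not in seen:
            seen.add(netloc)
            print(netloc)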

5) You'll now have a list of high PR domains you can plug into your usual Scrapebox footprints to look for backlinking opportunities. A footprint could be as simple as entering a bunch of domains as "site:domain.com" and then using the default option to look for WordPress blogs to post comments on.
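As a quick illustration of that last step, here's a small sketch that builds "site:" queries from your high PR list combined with a common WordPress comment footprint ("Leave a Reply" is just one example; Scrapebox's built-in WordPress footprints work the same way). Paste the output into the harvester's keyword box.

Code:
# Build "site:" footprints from the high PR list, combined with a common
# WordPress comment footprint. Domains below are stand-ins for your real list.
wp_footprint = '"Leave a Reply"'  # example footprint; Scrapebox ships its own defaults
domains = ["example1.com", "example2.org"]
for d in domains:
    print("site:" + d + " " + wp_footprint)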

The point is that all of the domains you end up with are high PR domains, so if you make those domains the basis of your footprints, the linking opportunities you uncover will be coming from high PR domains. That way, the actual pages you can get links from will carry PageRank as well.

Hope this helps someone out!