• Grouping queries online. Query clustering. Manual methods of distributing the semantic core

    This is just the beginning of the work. Without clustering, the collected data, although useful, will not reach its full potential. Semantic core clustering is the grouping of search queries after analyzing the search engine results. The process is quite labor-intensive when done manually (more on this below), but absolutely necessary for most resources.

    For many sites, it is important to separate informational queries from commercial ones. For example, queries like “product name” and “buy product name” will always have different search results, since the first is informational and the second is commercial. In practice, this means that promoting them on one page would be extremely difficult, so they are grouped, and a separate page is then made for each cluster.

    The example above is fairly simple; anyone without special knowledge can separate queries containing the word “buy” from all the others without looking at the search results. In practice, however, more complex cases often arise where full clustering with SERP analysis is necessary.

    If we talk point by point, clustering of the semantic core is needed for:

    • effective promotion of all search queries;
    • drawing up the correct technical specifications for copywriters (I’ll talk about this below);
    • saving money. With good clustering and high-quality content, most queries will rank at the top without additional steps on the part of the optimizer (buying links, etc.).

    Note that there are different types of semantic cores: clustering is only necessary for a content plan; it is not required for position tracking or for contextual advertising.

    Manual clustering of the semantic core

    Ordinary Excel will help here: you group the key phrases in it. In some cases you don’t even need to study the search results; all queries can be distributed among clusters without any difficulty. Online services that facilitate such work are also worth mentioning.

    An example of keyword grouping is here:

    • the first column contains the group’s serial number;
    • the second, the keyword;
    • the third, its frequency;
    • the fourth, the total frequency of the group (important for prioritization);
    • the fifth, the number of keywords in the group.

    kg.ppc-panel.ru
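    As an illustration, the five-column grouping layout described above can be assembled with a short script. This is just a sketch with made-up sample keywords, not the export format of any particular service:

```python
from collections import defaultdict

# Hypothetical sample data: (keyword, frequency, group number)
keywords = [
    ("buy extension cord", 320, 1),
    ("extension cord price", 150, 1),
    ("garden extension cord", 90, 2),
]

# Collect phrases by group number
groups = defaultdict(list)
for phrase, freq, group in keywords:
    groups[group].append((phrase, freq))

rows = []
for group, items in sorted(groups.items()):
    total = sum(freq for _, freq in items)  # total frequency of the group
    for phrase, freq in items:
        # group no. | keyword | frequency | group total | keywords in group
        rows.append((group, phrase, freq, total, len(items)))

for row in rows:
    print(row)
```

    The group total (column four) is what lets you prioritize clusters: groups with more combined demand get worked on first.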

    I haven’t covered other services; there are actually a lot of them. Many companies offer such tools today, but which one to choose is a matter of personal preference. I prefer highly specialized products, so I find it more convenient to work with KeyAssort. But for some, a service that also checks positions, collects keywords, etc. will be a better fit.

    If you have a large project with many key phrases, doing without clustering the semantic core would be a serious mistake, if only because your competitors will certainly do it. And if you already have a running site, it is still worth clustering for it: this will help you find keywords you missed and reassess the quality of your content. Sometimes it is enough to write one article or create a separate section instead of buying links to promote a query that ended up in an unsuccessful cluster.


    What is semantic clustering?

    The service operates online and lets you cluster keywords based on search engine results. Grouping is in fact only one of the service’s capabilities, but it is the one we’ll focus on here.

    We create a new project, in which we indicate its name, select the country, region, etc.

    We set the accuracy and specify which frequency type the service should work with.

    Click to create the project. In the window that appears, we will see the “control information,” which shows the cost of our project.

    You can also get acquainted with the cost by simply clicking on the price tab.

    After clustering, the service exports an Excel document with the grouped key phrases.

    We review the result and finalize it, since a machine did the work and errors are possible.

    • the work takes place online;
    • all projects you have worked with are saved;
    • it is paid, and the final price can be high;
    • you still have to review everything manually.

    Surely many have heard of this program, and some have used it to collect frequencies. Grouping keywords is just a small part of what this utility can do.

    You can group queries by phrase composition or based on search engine (SERP) results. SERP-based grouping only works once the KEI values have been collected. On average, the whole process takes about 2 minutes.
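    As a rough illustration of grouping by phrase composition (with no SERP data involved), here is a sketch that assigns each query to the first marker word it contains. The marker list and sample queries are hypothetical:

```python
from collections import defaultdict

def group_by_word(queries, markers):
    """Assign each query to the first marker word it contains;
    everything else goes into an 'ungrouped' bucket."""
    groups = defaultdict(list)
    for q in queries:
        words = set(q.lower().split())
        for m in markers:
            if m in words:
                groups[m].append(q)
                break
        else:  # no marker matched this query
            groups["ungrouped"].append(q)
    return dict(groups)

queries = ["buy lamp", "lamp price", "buy switch", "cable reel"]
print(group_by_word(queries, markers=["lamp", "switch"]))
```

    Composition-based grouping like this is fast but crude; as the article notes, SERP-based grouping is what resolves the ambiguous cases.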

    • intuitive interface;
    • ability to customize the grouping;
    • a huge number of options for working with semantics;
    • relatively low product price.

    • must be installed on a PC;
    • you cannot edit the resulting groups in the utility itself, only in the exported Excel document;
    • clusters must be adjusted manually.

    A well-known SEO platform with a tool for automatic clustering. What sets it apart from competitors is that it pulls the top 30 search results in real time for each phrase added to clustering and builds groups of semantically related phrases based on how many sites use a given phrase. The more sites from the top 30 share the same phrases, the stronger the connection between them, and the service adds them to one cluster. The developers have a separate video explaining how the mechanism works.
    And while the technical side of clustering is complex, setting up a project is easy for the user.

    There are 4 stages in total. First, set the project name. Second, import the key phrases for clustering; you can add them manually via Ctrl+V or import a file. Third, select the clustering region; the service lets you choose a region down to the city level, which is important for local SEO. In the final, fourth step, set the type and strength of clustering.




    In the fourth step, you can leave the default clustering mode. If you don’t like the result, you can simply change the project settings and re-enter the same keywords with different parameters for free.


    The result is exported to XLS, XLSX, or CSV files and looks like this:

    Clustering the semantic core is one of the most important stages of working on a project. It allows you to structure the site correctly in the early stages of development and steer it in the right direction later on, helping you avoid unnecessary rework and revisions after several months of promotion.

    For my part, I can add that grouping with services is good and convenient. But to be 100% sure of the result, the final stage of clustering should in any case be done manually.

    Clustering means taking keywords that are simply a list and dividing them into clusters (groups). This is what turns a thousand of your queries into a complete structure divided into categories, pages, articles, etc. Without a correct breakdown, you will waste a lot of money and time, since some queries cannot be “landed” on a single page, while others, on the contrary, must sit on the same URL.

    When collecting a semantic core, I usually do the clustering by hand; here are some links on the topic:

    But all this is easy and simple when we have clear groups of queries with different logical meanings. We know very well that the queries “stroller for twins” and “stroller for a boy” need different landing pages.

    But there are queries that are not clearly separated from one another, and it is difficult to determine “by feel” which queries should go on one page and which should be spread across different landing URLs.

    One of the participants in my SEO marathon asked me a question: “Petya, what to do with these keys: put everything on one page, create several, if so, how many?” And here is an excerpt from the list of keywords:

    The word “java” alone is searched in three spelling variations (the Latin “java” plus Cyrillic transliterations), and on top of that, people search for it for different games, devices, etc. There are a lot of queries, and it is genuinely hard to work out the best approach.

    What do you think is the right approach? Exactly: analyze the competitors who are already in the TOP for these keywords. Today I will show how you can cluster the semantic core based on competitor data.

    If you already have a ready-made list of keywords for clustering, you can immediately move on to point 4.

    1. Query matrix

    Let me take another example: I have a client with an online store of electrical and lighting equipment. The store carries a very large number of products (several tens of thousands).

    Of course, any store has products that are the highest priority to sell: high-margin items, or simply stock that needs to be cleared from the warehouse. So I received a letter along the lines of: “Petya, here is a list of products that interest us,” with a list attached:

    • switches;
    • light fixtures;
    • lamps;
    • spotlights;
    • extension cords;
    • and a few more items.

    I asked them to create a so-called “query matrix.” Since the store owners know their product range better than I do, they needed to list all the products and the main characteristics/differences of each one.

    It turned out something like this:

    When compiling the matrix, do not forget that some English-language brands are also searched for in Russian; this must be taken into account and added.

    Of course, if the product had other characteristics, a column was added. This could be “Color”, “Material”, etc.

    And such work was done for the highest priority goods.

    2. Multiplying queries

    There are many services and programs for multiplying queries. I used the key phrase generator at http://key-cleaner.ru/KeyGenerator; we enter all our queries there in columns:

    The service multiplied all possible variants containing the word “extension cord.” Important: many generators multiply only consecutive columns, i.e. the first column with the second, then the first two with the third, and so on. This one multiplies the first column with every combination of the others: first with second, first with third, first with fourth; then first × second × third, first × second × fourth, etc. That way we get the maximum number of phrases containing the main word from the first column (the so-called marker).

    A marker is the main phrase from which key queries must be generated. Without a marker, it is impossible to create a meaningful key query: we don’t need phrases like “IEC wholesale” or “buy on reel.”

    When multiplying, it is important that every key phrase contains this marker. In our example, it is the phrase “extension cord.” As a result, 1,439 (!) unique key phrases were generated in this example:
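    The multiplication behaviour described above (the marker column combined with every subset of the other columns) can be sketched like this; the column contents are invented for illustration:

```python
from itertools import combinations, product

marker = ["extension cord"]                         # column 1: the marker
columns = [["buy", "price"], ["IEC", "Universal"]]  # hypothetical extra columns

phrases = set()
# Combine the marker with every subset of the remaining columns,
# so every generated phrase is guaranteed to contain the marker.
for r in range(len(columns) + 1):
    for combo in combinations(columns, r):
        for words in product(marker, *combo):
            phrases.add(" ".join(words))

print(sorted(phrases))  # 9 phrases, each containing "extension cord"
```

    With two extra columns of two words each, this yields 1 + 2 + 2 + 4 = 9 phrases; with the full query matrix, the count grows quickly, which is how the example reaches 1,439 phrases.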

    3. Clearing requests from "garbage"

    Now there are two options. You can cluster all these queries and create a huge number of generated pages, one per cluster, if your site’s system allows it. Of course, each page should have its own unique meta tags, h1, and so on. And sometimes it is problematic to get pages of this kind into the index.

    We didn’t have such a possibility technically, so we didn’t even consider this option. It was necessary to create only the most necessary new landing pages in a “semi-manual” mode.

    Which frequency type should we work with? Since our list of products and their intersections were not very popular (narrowly targeted), I focused on frequencies in quotes (without exclamation marks), that is, across word forms: key phrases in different cases, numbers, and genders. This is the indicator that lets us roughly estimate the traffic we could get from Yandex by reaching the TOP.

    In Key Collector we collect the quoted frequencies for these phrases (and if you have a seasonal product, collect the frequencies during the season):

    Then we delete everything with a frequency of zero. If your topic is more popular and many words have non-zero frequency, you can raise the lower threshold to 5 or even higher. Out of 1,439 phrases, I have only 43 non-zero queries for Moscow and the Moscow region.
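    The zero-frequency filter can be sketched in a few lines; the phrases and frequencies below are made up:

```python
# Hypothetical phrases with their quoted ("word-form") frequencies
phrase_freq = {
    "buy extension cord on reel": 43,
    "IEC extension cord 5 m": 12,
    "extension cord wholesale reel": 0,
}

threshold = 0  # raise to 5 or higher in more popular topics
kept = {p: f for p, f in phrase_freq.items() if f > threshold}
print(kept)
```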

    I transfer these 43 phrases with frequency data into Excel:

    4. Query clustering

    I do all this in Rush Analytics, here is the clustering algorithm in this service:

    For each query, the TOP-10 URLs for the given region are “pulled” from the search results. Clustering then proceeds by shared URLs. You can set the clustering accuracy yourself (from 3 to 8 shared URLs).

    Say we set the accuracy to 3. The system remembers the URLs of the pages in the TOP 10 for the first query. If the TOP 10 for the second query contains at least the same 3 URLs, the two queries fall into one cluster. The number of shared URLs required depends on the accuracy we specify. This processing is applied to every query, and as a result the keywords are divided into clusters.
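    The shared-URL clustering logic described above can be sketched as follows. This is a simplified greedy version with hypothetical SERP data, not Rush Analytics’ actual implementation:

```python
def cluster_by_serp(serps, accuracy=3):
    """serps: dict mapping query -> set of its TOP-10 URLs.
    A query joins the first existing cluster whose reference SERP
    shares at least `accuracy` URLs with its own; otherwise it
    starts a new cluster."""
    clusters = []  # each entry: (list of member queries, reference URL set)
    for query, urls in serps.items():
        for members, ref_urls in clusters:
            if len(urls & ref_urls) >= accuracy:
                members.append(query)
                break
        else:
            clusters.append(([query], urls))
    return [members for members, _ in clusters]

serps = {
    "buy extension cord":    {"a.ru", "b.ru", "c.ru", "d.ru"},
    "extension cord price":  {"a.ru", "b.ru", "c.ru", "e.ru"},
    "garden extension cord": {"x.ru", "y.ru", "z.ru", "w.ru"},
}
print(cluster_by_serp(serps, accuracy=3))
# [['buy extension cord', 'extension cord price'], ['garden extension cord']]
```

    Raising `accuracy` makes clusters smaller and stricter; lowering it merges more queries together, which matches the accuracy setting in the service.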

    1. Go to RushAnalytics -> Clustering, create a new project (upon registration, everyone receives 200 rubles in their account for testing, convenient):
    2. We choose a priority search engine for us and a region:

    3. Select the clustering type. In this case I choose “Wordstat.” The “Manual markers” method does not suit me, since there is only one marker (“extension cord”) in the queries. If you are loading several different product types at once (for example, extension cords, light bulbs, etc.), it is better to select the “Wordstat + manual markers” type and specify the markers (markers are flagged with 1 in the second column, non-markers with 0; the frequency goes in the third column). The markers are the most basic queries that are not logically connected with each other (the queries “extension cord” and “light bulb” cannot share one page). In my case, I work with each product step by step and created separate campaigns for convenience. You also select the clustering accuracy. If you don’t yet know which to choose, you can tick all of them (this does not affect the price), and after receiving the result pick the option that clustered your queries best. From experience, accuracy = 5 is the most suitable across topics. If you are clustering for an existing site, I recommend entering your site’s URL (if your site is in the TOP 10 for a query, your URL will be highlighted in green in the resulting file):

    4. In the next step, upload the file to the system. You can also configure stop words, but my file had none, so this function is not needed in this example. Clustering costs from 50 down to 30 kopecks per query (depending on volume):
    5. You will need to wait a little while the Rush Analytics service does its job. Enter the completed project. Already there you can view the clusters based on the clustering accuracy (the beginning of a new cluster and its name are highlighted in bold):
    6. Again, it is best to use precision 5 for clustering. It fits most often.
    7. Also in the next tab you can see a list of non-clustered words:

      Why didn’t they cluster, you ask? Most likely the search results for these queries are of low quality, and the system could not automatically assign them to any cluster. What to do with them? You can cluster them manually and create separate landing pages where it makes logical sense. You can even create a separate cluster for a single query and “land” it on its own page. Or you can expand the word list and re-cluster in Rush Analytics.
    8. In the "Subject Leaders" tab you can see the TOP domains for these queries:

    9. By the way, next to some queries you can see thumbs-up icons highlighted in green:
      This means that for these queries you already have a landing page for this cluster in the TOP 10, and you should keep working on it.
    10. You can download this whole thing to your computer in Excel and work in this document. I work with precision 5, so I download this file:

    11. The Excel document contains the same information. The beginning of each cluster and its name are highlighted in gray (click on the image to enlarge):

    12. In addition to the cluster names, here you will see their sizes, frequencies, total frequencies, top URLs, relevant URLs, and SERP highlights, which are very useful when working on a landing page. Here they are:

      Note that the “Universal” brand (written with a Cyrillic “U”) is also highlighted; I didn’t even suspect the brand could be spelled that way. In the highlights you will also see synonyms and thematic phrases that are highly desirable to use on landing pages to reach the TOP.

    Conclusion

    What’s next? What does this clustering give us? Now each cluster should have its own separate, and most importantly relevant, URL on our website. Promoting these pages is entirely in our hands, and we push them further as best we can (content optimization, internal linking, external optimization, social factors, etc.).

    If we clustered incorrectly, many queries would be difficult to promote. It would be an “anchor” holding us back, no matter how much money we spent promoting those pages.

    Correct clustering will help you save a lot and make it much easier to get into the coveted TOP.

    What do you think about this? How do you cluster semantic core queries?

    Now let’s talk about clustering key search queries. Grouping errors will cost you valuable time, money, and other problems. In this article I want to explain the main principles and rules of grouping, and show examples of services and programs.

    Keyword Clustering

    I highlight 2 main points when grouping:

    1. queries must fit together logically;
    2. queries must show the same search results in Yandex.

    From a logical standpoint, everything is clear: you cannot put the keywords “buy a phone” and “car painting in Omsk” on one page. One way or another, the queries must fit each other in meaning. If we have a page about finishing ceilings in an apartment, then all its queries should be about finishing ceilings.

    With the SERP check, things are less obvious. The essence is this: we enter the queries into Yandex in incognito mode, select the promotion region, and see how much the search results overlap.

    Say there are two queries, “finishing ceilings in an apartment” and “finishing ceilings in a bathroom,” and you need to understand whether these keywords fit on one page. Open two Yandex windows and enter the queries.

    It is immediately clear that the first SERP is specifically about finishing ceilings in an apartment, and the second about the bathroom. This means the queries lead to different pages and cannot be combined.

    Here is another example: the phrases “buy heating batteries” and “buy heating radiators.” The queries seem different, but let’s look at the results.

    As you can see, the results are the same: both batteries and radiators are present. Therefore, these two queries can safely be placed on one page.
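    The SERP-overlap check from the examples above can be expressed as a small helper; the URL lists here are invented for illustration:

```python
def serp_overlap(top_a, top_b):
    """Fraction of URLs shared by two TOP result lists."""
    shared = set(top_a) & set(top_b)
    return len(shared) / max(len(top_a), len(top_b))

top_batteries = ["shop1.ru", "shop2.ru", "shop3.ru", "shop4.ru"]
top_radiators = ["shop1.ru", "shop2.ru", "shop3.ru", "shop5.ru"]

# A high overlap (say, above 50%) suggests the two queries can
# target one page; a low overlap suggests separate landing pages.
print(f"{serp_overlap(top_batteries, top_radiators):.0%}")  # 75%
```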

    Programs and services for keyword clustering

    Clustering the semantic core in Excel is quite simple: put all the queries into the spreadsheet and start grouping them manually, using the principles described above. That is, first group by meaning, then check the Yandex results.

    That said, it sometimes happens that the results are “blurry” for two or more queries, and it is unclear whether to place them together or separately. This means the competition is low and the SERP has not clearly formed, so it won’t be a mistake to place the queries together or on different pages, whichever is more convenient for you.

    Here is an example of semantic core clustering in Excel.

    I often use this method myself when the topic is not complex and there are not many keywords; 100-200 keywords is quite manageable.

    Watch the video on how to cluster a core in Excel.

    You can also use the free online manual clustering service kg.ppc-panel.ru as an alternative to Excel.

    Automatic clustering

    If the semantic core is very large, then I use the service for automatic clustering of search queries seopult.ru. This is a VERY cheap service compared to analogues.

    Its only drawback is that the grouping is not entirely accurate, so you still need to review the clustering and correct shortcomings manually.

    Still, I don’t think there is a SINGLE service that does 100% correct grouping. Even companies that specialize in collecting and clustering semantics still manually check and edit the final result.

    Here's a quick overview on setting up the project.

    The service will calculate how much clustering the core costs and offer to launch the project. This is the paid grouping option I use, and it suits me quite well.

    Here is a detailed video on how to use the tool:

    Clustering queries in Key Collector

    This method is also quite widely used, but as elsewhere, the result still needs to be refined manually.

    Load the semantic core into the program and select the promotion region.