403 error: What is it and how can you fix it?

403 errors are less common than other types of errors, but they do happen. They occur when a client requests a resource it is not authorized to access, which means the request cannot be fulfilled.

In this article, we explain what causes a 403 error, what consequences this can have, and what you can do about it. And if you want to learn more about the whole topic of status codes, check out our beginner-friendly guide.

What does an error code 403 mean?

The 403 error, also called the 403 Forbidden error or HTTP 403 error code, is issued by a server if a client (browser) lacks the required access rights. Access is “forbidden” and the message “Error 403 – Forbidden” appears in the browser window.

The more detailed explanation:

If a client such as a browser wants to retrieve a URL from a server via HTTP, the server first checks this request. If the page exists and can be displayed, the server sends the status code 200 OK. The browser can then load the website and display it to the user. This “transaction” between the client and server usually goes unnoticed by users, unless errors occur.

The most common errors you encounter are 4xx errors, which belong to a class referred to as client errors. Error 403 is one of them. If a browser connects to a server via HTTP, the server can deny access. In this case, the server returns the 403 Forbidden error and the browser cannot access the desired resource.
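To see what this looks like from the client’s perspective, here is a minimal sketch in Python using the third-party requests library. The admin URL is a hypothetical placeholder, not a URL from this article:

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical URL of a restricted admin area on your own site.
url = "https://www.example.com/wp-admin/"

response = requests.get(url)
print(response.status_code)  # e.g. 403 if the server denies access

if response.status_code == 403:
    # The server understood the request but refuses to fulfil it.
    print("Access forbidden:", response.reason)
```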


Figure 1: Notification from the server when attempting to access an admin page of a WordPress blog

Even though error code 403 initially suggests a client error, it is ultimately the server settings, or the settings of the respective CMS, that determine whether a client has access to certain directories or URLs.

Consequences of 403 errors for your website

If URLs cannot be displayed by browsers, they have no added value for website visitors. A 403 error inevitably leads to a negative user experience and significantly limits your website’s usability. As a result, your site may be visited less often if 403 errors occur frequently.

For Google, a 403 error is also a problem because the Googlebot cannot crawl the contents of the URLs in question and render them like a browser would. There is therefore a risk that the pages might be removed from the Google index.

In 2014, Matt Cutts mentioned a grace period of 24 hours after the Googlebot encounters a 403 page. According to Cutts, that is how long the system allows the URL to remain in the crawling system.

In a round of SEO questions on Reddit, Google’s John Mueller also commented on the topic of 4xx errors. There, his advice became more specific:


Figure 2: Statement by John Mueller on 4xx errors (Source)

So one thing is clear: if a URL does not deliver content for a client request, including a Googlebot request, it will be removed from the index.

Reasons for a 403 error

There are several reasons why a website might return a 403 error. In many cases, the access block is set deliberately, and that often makes sense:

  • Exclusion from restricted login areas: websites can block ordinary users from accessing certain admin login areas. For example, access can be restricted to specific IP addresses or to VPN connections. As a rule, these URLs are not relevant for the standard user and cannot be accessed via the frontend. Only site administrators have access to these pages.

In addition to these deliberate restrictions, users can also be excluded if directories are blocked unintentionally. This can happen in the following cases:

  • You have created a new area on your website that has not yet been fully set up and can only be accessed after logging in. However, you have inadvertently already set an internal link to this area via the menu.

  • The server admin has inadvertently blocked an entire directory for unauthorized users. This can be caused by an incorrect configuration or a syntax error in the .htaccess file.

  • The server has restricted read permissions on the website’s files for all users and all areas (a quick way to check this is sketched below).
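As an illustration of the last point, here is a minimal sketch in Python that checks whether a file inside the web root is readable by other users. The path is a hypothetical example, and the exact permission requirements depend on which user your web server runs as:

```python
import os
import stat

# Hypothetical path inside the web server's document root.
path = "/var/www/html/index.html"

mode = os.stat(path).st_mode
if not mode & stat.S_IROTH:
    # If the web server process is neither the file's owner nor in its group,
    # a missing "read for others" bit typically results in a 403.
    print(f"{path} is not world-readable ({stat.filemode(mode)})")
else:
    print(f"{path} is readable by all users ({stat.filemode(mode)})")
```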

403 errors can also occur for bots when they try to crawl your site. For example, if the rules in the robots.txt prevent the Googlebot from crawling directories that are important for the functionality of your website, this can result in this kind of error. Forbidden errors are also possible if you use the robots.txt to exclude central content directories from being crawled.
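To check which paths your robots.txt currently blocks for the Googlebot, you can use Python’s built-in urllib.robotparser. The domain and path below are hypothetical placeholders:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical domain; replace with your own site.
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt

# Check whether the Googlebot is allowed to crawl an important directory.
allowed = rp.can_fetch("Googlebot", "https://www.example.com/blog/")
print("Googlebot may crawl /blog/:", allowed)
```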

How to find and fix 403 forbidden errors

Ryte can help you identify 4xx errors. The quickest way to find out about these errors is to click the “Critical Errors” report under Quality Assurance.

In addition, you can check the status codes of your website under Quality Assurance > “Status Codes”. Take note of when your project was last crawled.
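If you prefer to spot-check a handful of URLs yourself, the following sketch (again using the third-party requests library, with placeholder URLs) reports every URL that returns a 4xx status code:

```python
import requests  # pip install requests

# Hypothetical URL list, e.g. taken from your sitemap or a crawl export.
urls = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
    "https://www.example.com/intern/",
]

for url in urls:
    status = requests.get(url, allow_redirects=True).status_code
    if 400 <= status < 500:
        print(f"{status} client error: {url}")
```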


Figure 3: Check status codes of a website with Ryte Website Success

The Google Search Console (GSC) will also show you if there are 403 errors. You can find the corresponding report in the “Crawl Errors” section:


Figure 4: Determining crawling errors with the GSC

What can I do about 403 errors?

If access to directories or URLs on your website is denied to clients, you should take action. First, check whether the robots.txt excludes important directories from being crawled. Ryte can help you with that: the “Robots.txt monitoring” report under Search Engine Optimisation shows you which areas are currently disallowed from crawling.


Figure 5: Check robots.txt with Ryte

The Google Search Console is also suitable for checking the robots.txt. You can find the report in the “Crawl” section of the old version of the GSC. The robots.txt tester has not yet been integrated into the new user interface (as of July 2019).


Figure 6: Test robots.txt with GSC

With the tool “Fetch as Google,” you can check whether the Googlebot is prevented from crawling important areas due to restrictions in the robots.txt.

Conclusion

403 errors are first and foremost client errors, but they can also be caused by an incorrect configuration of the server or the robots.txt file. If you have 403 errors on your site, you should act quickly; otherwise Google will not index the affected URLs, as they do not deliver content and are negative for the user experience.
