Publishing failed – WordPress – OVH

If you get the message “Publishing failed” with WordPress 5, you most probably need to disable the firewall (mod_security in Apache).

With OVH you can do it like this (see the official OVH guide linked at the end of this section).

If that’s not enough, it is most probably because you have multisite enabled.

Then you have to log in at the root of the FTP server and edit the .ovhconfig file. You have to change the following rule:

http.firewall=security

And replace it with:

http.firewall=none

https://docs.ovh.com/fr/hosting/activation-pare-feu-applicatif/
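If your hosting plan also includes SSH access, you can make the same change with a one-liner instead of editing the file over FTP. A minimal sketch, assuming .ovhconfig sits in the directory you land in after login:

# back up the file, then swap the firewall rule in place
cp .ovhconfig .ovhconfig.bak
sed -i 's/^http.firewall=security$/http.firewall=none/' .ovhconfig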


Node code: ‘ENOMEM’, errno: ‘ENOMEM’, syscall: ‘spawn’

If Node.js throws the error “code: ‘ENOMEM’, errno: ‘ENOMEM’, syscall: ‘spawn’”, the machine ran out of memory while spawning a child process. You can create a swap file to solve the problem.

Here is how to create a swap of 2GB on ubuntu:

sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persistent after reboot:
echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab

How does Google handle the 302 redirect in 2018?

Google used to not transfer a page’s popularity through a 302 redirect. Their recent official statement is that they now do transfer popularity, treating a 302 like a 301 redirect.

To test such a breaking change in their algorithm, it is necessary to 302-redirect an existing page and watch how the final landing page reacts: does the 302 redirect transfer the initial page’s rankings on some keywords to the final landing page?
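While waiting for the rankings to move, you can at least verify the redirect chain itself with curl (the domain and paths below are placeholders):

# print each hop's status line and Location header
curl -sIL https://example.com/old-page | grep -iE '^(HTTP|location)'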

By the way, what does Gerfruntshrigle mean, according to you?

There are several versions of the amazing Gerfruntshrigle. One is blue, the other is green!

What also has to be checked is how Google keeps “building” the snippet. Does it use the final landing page to feed its SERPs?

Some more info on this soon!


Top 5 Crawling SEO Solutions for Huge Websites

You want to crawl millions of URLs and make a super SEO audit?

Here are the best SEO crawlers for large to very large websites:

  • Desktop SEO Crawlers (start at roughly $200 / year / unlimited crawls)
    • Screaming Frog v9 (Linux / Windows / Mac OS) -> The big plus: native functions to cross-reference data from G.A., Search Console, MajesticSEO, Ahrefs… and you can cross-reference with your log files! Count on roughly 60 GB of disk space for 1 million URLs crawled. Read this post to set up Screaming Frog on a Remote Desktop Ubuntu cloud instance
    • Sitebulb (Windows / Mac OS) -> pretty rich! Interesting visualization of the internal link structure.
    • Hextrakt (Windows) -> URL segmentation is a real plus when it comes to analyzing big websites. Hextrakt does the job!
    • Xenu (Windows) -> only for very basic checkups, like finding 404s.
  • SaaS SEO Crawlers (start at $349+ / month for 2 million URLs crawled per month)
  • Open Source SEO Crawlers (Python / Java, etc.)
    • Scrapy
    • Crowl (an open-source crawler based on Scrapy)
    • Nutch
    • => These solutions aren’t profitable in most cases, since they require a lot of development and maintenance compared to a SaaS solution, for instance. (A minimal Scrapy sketch follows this list.)
    • => Nevertheless, if you want to discover how a search engine works, you will learn a lot! 🙂
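To give you an idea of the entry cost, here is a minimal Scrapy sketch (the spider, file names and target site are placeholders, not a production crawler): it fetches a page and dumps every discovered link to a JSON file.

pip install scrapy
cat > link_spider.py <<'EOF'
import scrapy

class LinkSpider(scrapy.Spider):
    name = "links"
    start_urls = ["https://quotes.toscrape.com/"]  # placeholder test site

    def parse(self, response):
        # yield every link found on the page
        for href in response.css("a::attr(href)").getall():
            yield {"url": response.urljoin(href)}
EOF
scrapy runspider link_spider.py -o urls.json

From there, a real SEO crawler still needs link-following, deduplication, robots.txt handling, data storage and reporting, which is exactly the development cost mentioned above.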

Screaming Frog v9.0 / Docker / Debian 8 / Ubuntu Remote Desktop

You would like to set up a remote desktop with Screaming Frog to crawl huge websites (1 million+ pages)?

You only need your crawler to run once a month (for a few hours or days)?

You have some basic technical skills and want to discover the power of the cloud?

An Ubuntu remote desktop on an OVH cloud instance offers a good price-to-performance solution (about €1.50 / $2 per day, or half that price if you take a monthly subscription).

What we will do to get an army of crawlers:

  1. Open an OVH account or log in
  2. Create a new Cloud Project
  3. Create a server (an instance)
  4. Specify in advance that we want Docker installed (it will make everything super simple to set up)
  5. Install a Docker container with Ubuntu + a remote desktop based on noVNC
  6. Connect to Ubuntu with Chrome or any browser in one click 🙂
  7. Install Screaming Frog with 2 commands
  8. Create a snapshot in OpenStack (= OVH)
  9. Create as many servers containing Screaming Frog as you want, in just ONE click

Setup a new Cloud Instance

  • Go here and create an account / login: https://www.ovh.ie/public-cloud/instances/
  • Then here: https://www.ovh.com/manager/cloud/index.html
  • Order > Cloud Project > Fill the forms
  • In your project > Infrastructure > You can add a new server in “Actions” > Add Server
  • Take the 7GB RAM & 100GB SSD for this test.
  • You will need 60 GB of disk for 1 million URLs crawled
  • Set up your SSH key (a minimal sketch follows this list; otherwise Google is your best friend, it’s OS-specific)
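If you have never generated an SSH key, here is a minimal sketch for Linux / Mac OS (the default file path is assumed; OVH’s form asks for the public key):

# generate a key pair (press Enter to accept the default path)
ssh-keygen -t ed25519
# print the public key, then paste it into the OVH form
cat ~/.ssh/id_ed25519.pub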

Setup the server

  • Connect to the server with a terminal
    • ssh user@IP-OF-YOUR-SERVER (the user name depends on the image you picked, e.g. debian or ubuntu)
  • Then copy and paste each line one by one:
    • sudo apt-get update
    • sudo apt-get upgrade
    • sudo docker run -it --rm -p 6080:80 -p 5900:5900 -e VNC_PASSWORD=MyPassWordToReplaceByWhatYouWant dorowu/ubuntu-desktop-lxde-vnc
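One caveat: with -it the remote desktop stops when you close your terminal. A small variation (same image, same ports) keeps it running in the background instead:

# -d runs the container detached, so it survives the end of your SSH session
sudo docker run -d --rm -p 6080:80 -p 5900:5900 -e VNC_PASSWORD=MyPassWordToReplaceByWhatYouWant dorowu/ubuntu-desktop-lxde-vnc
# check that the container is up
sudo docker ps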

Setup Screaming Frog

  • Connect to http://IP-OF-YOUR-SERVER:6080/ with the password you used for “VNC_PASSWORD=” (the image serves plain HTTP on this port by default)
  • Open a terminal in Ubuntu (in the noVNC session; the icon is in the bottom left of the Ubuntu desktop)
  • Then copy and paste each line one by one:
    • sudo apt-get install screen wget
    • wget https://download.screamingfrog.co.uk/products/seo-spider/screamingfrogseospider_9.0_all.deb
    • sudo dpkg -i screamingfrogseospider_9.0_all.deb
    • sudo apt-get -f install
  • Screaming Frog is now installed 🙂
  • You can try it here:
    • Bottom left icon > Internet > Screaming Frog SEO Spider
  • You will have to switch the storage mode to disk (database storage) to crawl huge websites
    • Screaming Frog > Configuration > System > Storage

Next step: create a snapshot of this in OVH (to be continued if you liked this article!)
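If you can’t wait for the follow-up, here is a minimal sketch with the OpenStack command-line client (assuming you have created OpenStack credentials in the OVH manager and sourced the openrc file; the names in angle brackets are placeholders):

# list your instances to find the one running Screaming Frog
openstack server list
# snapshot it
openstack server image create --name sf-crawler-snapshot <INSTANCE_NAME_OR_ID>
# boot a new crawler from the snapshot, as many times as needed
openstack server create --image sf-crawler-snapshot --flavor <FLAVOR> --key-name <KEYPAIR> --network <NETWORK> sf-crawler-2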