Category: Development

  • Image, Favicon & Icon size checker online




    Introduction

    Discover the exact dimensions of your images, ICO files, and favicons with our easy-to-use online tool. Whether you provide a direct URL, a website URL, or upload a file, the tool quickly analyzes and displays the size of your image or icon in pixels.

    Key Features

    • Multiple Input Methods: Analyze image or ICO dimensions from a direct URL, a website URL, or an uploaded file.
    • Broad Format Support: Works with all common image types (BMP, GIF, JPEG, PNG, WebP, SVG, and AVIF).
    • Instant Results: Get the dimensions of your image and icon displayed immediately after submission.
    • User-Friendly Interface: Simple, clean design for easy use.
    • Accurate Measurements: Ensure your images, ICO files, and favicons meet the required size specifications.

    Why Use Our Tool?

    Images, icons, and favicons play a crucial role in web design, enhancing the visual appeal and brand recognition of your website. With our tool, you can make sure your icons are correctly sized, improving site performance and user experience.

    How to Use

    1. Enter an image or ICO URL: Paste the direct link to your image or ICO file.
    2. OR enter a website URL: Provide the URL of a website to fetch its favicon.
    3. OR upload an image or ICO file: Select and upload a file from your device.

    Click “Analyze” to get the dimensions of your image or icon instantly.
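
    If you prefer to check dimensions locally, here is a minimal Python sketch using the Pillow library (the file name favicon.ico is a placeholder):

    from PIL import Image  # pip install Pillow

    # Open a local image or ICO file and print its dimensions in pixels.
    # For multi-size ICO files, Pillow reads the largest image by default.
    img = Image.open('favicon.ico')
    print(img.size)  # e.g. (32, 32)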

    Conclusion

    Keep your website looking professional with perfectly sized images, icons, and favicons. Use our Image and ICO Dimension Analyzer tool to check and optimize your icon sizes with ease. Try it now and see the difference!

  • XPath: Counting the number of words

    To count the number of words using XPath, we will use three XPath functions:

    • string-length: counts the characters in a string.
    • normalize-space: strips leading and trailing whitespace from a string, collapses each run of whitespace into a single space, and returns the resulting string.
    • translate: replaces characters in a string; here we use it to remove all the spaces.

    The XPath expression performs the following operations:

    1. Remove the unnecessary extra spaces from the article.
    2. Count the number of characters.
    3. Remove all the spaces found in the article.
    4. Count the number of characters again.
    5. Subtract the two results.
    6. Add 1 to the result of the subtraction.

    Here is the XPath expression to count the number of words:

    string-length(normalize-space(//*[@id="content"])) - string-length(translate(normalize-space(//*[@id="content"]),' ','')) +1
    

    You just need to replace //*[@id="content"] with your own XPath 🙂
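
    To test the expression outside the browser, here is a minimal Python sketch using lxml (the HTML snippet is a made-up example):

    from lxml import html

    # XPath 1.0 arithmetic expressions evaluate to a float in lxml.
    page = html.fromstring('<div id="content">  one   two three  </div>')
    words = page.xpath(
        "string-length(normalize-space(//*[@id='content']))"
        " - string-length(translate(normalize-space(//*[@id='content']),' ',''))"
        " + 1"
    )
    print(int(words))  # 3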

  • How to increase maximum upload file size in LiteSpeed/WordPress/Ubuntu 20

    Unlike Apache, LiteSpeed requires you to restart each service that interacts with it. PHP is no exception to this rule, so you must restart it too after changing the php.ini file.

    1/ Edit the following file (here for PHP 8.0):

    /usr/local/lsws/lsphp80/etc/php/8.0/litespeed/php.ini

    For instance with nano:

    nano /usr/local/lsws/lsphp80/etc/php/8.0/litespeed/php.ini

    2/ Change upload_max_filesize & post_max_size (you might also consider increasing max_execution_time & max_input_time):
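
    For instance (these values are only examples; adjust them to your needs):

    upload_max_filesize = 64M
    post_max_size = 64M
    max_execution_time = 300
    max_input_time = 300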

    3/ Restart BOTH LiteSpeed & PHP:

    service lsws restart && killall lsphp
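
    To confirm the new limit is active, check it through the web server rather than the CLI, since the CLI may load a different php.ini. For instance, with a temporary info.php file (hypothetical name) in your web root:

    <?php echo ini_get('upload_max_filesize');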

    And… That’s it!

    Here is what ChatGPT knows about OpenLiteSpeed and upload_max_filesize:

    OpenLiteSpeed is a high-performance open source web server software that can be used to serve web content and applications. The “upload_max_filesize” setting in OpenLiteSpeed is a PHP directive that determines the maximum size of a file that can be uploaded to a website.

    To change the “upload_max_filesize” setting in OpenLiteSpeed, you need to modify the php.ini file on your server. This can be done by locating the php.ini file, opening it in a text editor, and changing the value of “upload_max_filesize” to the desired size. After making the change, you’ll need to restart OpenLiteSpeed to ensure that the new setting takes effect.

    It’s important to note that changing the “upload_max_filesize” setting can have implications for server performance and security, so it should be done with care. Large file uploads can consume a lot of server resources and may leave the server vulnerable to attacks, so it’s recommended to set a reasonable limit that meets the needs of your website or application while also protecting your server.

    => ChatGPT knows the solution, again! 🙂

  • Add column with domain name to CSV file with Python

    A script to add a column containing only the domain name to an existing CSV file. It extracts the domain from a column containing a URL.

    It works with .co.uk and other country-code top-level domains.

    Just replace the “5” with the index of the column containing the URL.

    Also, don’t forget to adjust the delimiter: here it is set up with semicolons for both input and output. Just change the delimiter to the one you need.
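
    The script relies on the third-party tldextract package; install it first if needed:

    pip install tldextract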

    import csv
    import tldextract

    # Read the input CSV, append a domain_name column, and write the result.
    # newline='' avoids blank lines in the output on Windows.
    with open('input.csv', 'r', newline='') as csvinput:
        with open('output.csv', 'w', newline='') as csvoutput:
            writer = csv.writer(csvoutput, delimiter=';')
            reader = csv.reader(csvinput, delimiter=';')

            rows = []

            # Header row: add the name of the new column.
            header = next(reader)
            header.append('domain_name')
            rows.append(header)

            for row in reader:
                # The URL is in column #5 (zero-based index).
                ext = tldextract.extract(row[5])
                row.append(ext.registered_domain)
                rows.append(row)

            writer.writerows(rows)

  • tldextract: The Best Python Library for Domain Name Extraction

    Why Use tldextract?

    When working with URLs, extracting key components like subdomains, main domains, and suffixes is essential. Whether you’re developing a spam filter, analyzing web traffic, or managing SEO projects, tldextract provides a reliable and ready-to-use solution.

    How Does tldextract Work?

    tldextract uses the Public Suffix List to accurately identify domains and their components. This ensures it remains up-to-date with the latest suffix changes.

    Basic Example

    Here’s a simple example of extracting the registered domain:

    
    import tldextract
    
    ext = tldextract.extract('http://forums.bbc.co.uk')
    print(ext.registered_domain)  # Output: bbc.co.uk
        

    Decomposing a URL

    You can also extract individual components like subdomains, domains, and suffixes:

    
    import tldextract
    
    url = "https://blog.data-science.example.co.uk/path"
    ext = tldextract.extract(url)
    
    print("Subdomain:", ext.subdomain)  # Output: blog.data-science
    print("Domain:", ext.domain)        # Output: example
    print("Suffix:", ext.suffix)        # Output: co.uk
        

    Advantages of tldextract

    • Accuracy: Handles complex domain structures and non-standard URLs.
    • Automatic Updates: Keeps track of changes in the Public Suffix List.
    • Ease of Use: Provides a straightforward API for extracting domain components.

    Use Cases

    tldextract can be applied to various real-world scenarios, including:

    • Filtering and whitelisting domains for security applications.
    • Analyzing web server logs to determine top-level traffic sources.
    • Building SEO tools to classify domains and subdomains.
    • Detecting malicious domains in anti-phishing systems.

    Filtering by Registered Domain

    You can easily filter URLs by registered domain:

    
    import tldextract
    
    url = "https://malicious.example.com"
    ext = tldextract.extract(url)
    
    if ext.registered_domain == "example.com":
        print("URL matches the target domain.")
        

    Optimizing Performance

    If you process a large number of URLs, configure tldextract to use a local copy of the Public Suffix List to avoid network delays:

    
    import tldextract
    
    extractor = tldextract.TLDExtract(cache_dir='/path/to/local/cache', suffix_list_urls=None)
    ext = extractor("https://example.co.uk")
    print(ext.registered_domain)
        

    Learn More

    For more details, visit the official tldextract GitHub repository.

  • Finding the LCP node with Chrome DevTools

    The official documentation about Largest Contentful Paint is super interesting and explicit. But it misses one thing: how do you identify the largest node / block / image / text?

    Chrome DevTools allows you to find which node you should optimize. Simply follow the steps below.

    • Open Chrome
    • Open the page you want to find the LCP block on
    • Open Chrome DevTools
      • If you are on a PC, type Ctrl+Shift+C
      • If you are on a Mac, just type the following hieroglyph: ⌘+Shift+C
    [Image: The shortcut to open Chrome DevTools]
    • Follow the detailed steps written below.
    • Open the tab “Performance”, between “Network” and “Memory”
    [Image: The horizontal menu with the Performance tab]
    • There are two options to record all the events and other data:
      • Option #1 – Record and manually reload and stop
          • Click on the Record icon, reload the page manually by clicking on the reload icon next to the URL, then come back to DevTools and click on the blue “Stop” button.
      • Option #2 – Automatic record, reload and stop
        • Click on the Reload icon
    [Image: Three buttons: Record, Record->Reload->Stop and Clear results]

    Which option should you choose?

    It depends… Sometimes you will have to interact with the page to trigger the LCP; in this case, you should record manually.
    • The LCP tag should be in the Timings row, below the Frames and Interactions sections. If you don’t see it immediately, try scrolling below Interactions.
    • Click on LCP: it will show you which element is considered the LCP.
    • Click on the related node (if present): it will take you directly to the node in the source code of the page.
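
    As an alternative to the Performance panel, you can also log the LCP element from the Console using the standard PerformanceObserver API. A minimal sketch (paste it in the Console; buffered entries are replayed, so it also works after the page has loaded):

    new PerformanceObserver((entryList) => {
      for (const entry of entryList.getEntries()) {
        // entry.element is the DOM node currently considered the LCP.
        console.log('LCP candidate at', entry.startTime, 'ms:', entry.element);
      }
    }).observe({ type: 'largest-contentful-paint', buffered: true });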

    I hope it is clear enough. If you have any questions, feel free to contact me on Twitter.

  • SSH – Could not open a connection to your authentication agent

    If you get “Could not open a connection to your authentication agent”, try:

    eval `ssh-agent -s` && ssh-add ~/.ssh/your_id_rsa_private_key

    => It works on Ubuntu 16, 18 and Debian

    Still a bug?
    Try:
    ssh -vvvT git@gitlab.com

  • How to install mnoGoSearch on Debian Jessie

    mnoGoSearch, tested with a fresh cloud.runabove.com Debian Jessie 7.5 instance:

    sudo nano /etc/apt/sources.list
    #Only jessie distrib
    ---> deb http://ftp.debian.org/debian jessie main
    ---> deb-src http://ftp.debian.org/debian jessie main
    
    #Update package list
    sudo apt-get update
    
    #Upgrade distrib with new distrib repository
    sudo apt-get upgrade
    
    #Install MySQL & PhpMyAdmin BEFORE mnoGoSearch
    sudo apt-get install mysql-server phpmyadmin
    
    #Create a new db
    ---> create new db db_test_mnogo at http://IP/phpmyadmin/
    
    #Go in the /tmp/ directory for instance
    cd /tmp/
    
    #Download mnogoSearch package / Here it's not the last one
    wget http://www.mnogosearch.org/Download/deb/mnogosearch_3.3.13-1.static_amd64.deb
    
    #Unpack and install
    sudo dpkg -i mnogosearch_3.3.13-1.static_amd64.deb
    
    #Go to the newly created mnoGoSearch directory
    cd /etc/mnogosearch
    
    #Backup and rename the conf file
    sudo cp indexer.conf-dist indexer.conf
    
    #Setup mnoGoSearch to work with MySQL
    sudo nano indexer.conf
    ---> replace DBAddr  mysql://root:passmysql@localhost/db_test_mnogo/?dbmode=blob
    ---> add Server http://www.website-i-want-to-crawl.com/ near the end of the file
    
    #Go to the exe file directory
    cd /usr/sbin/mnogosearch
    
    #Create DB
    ./indexer -Ecreate
    #run
    ./indexer
    Enjoy :)

    Official doc: http://www.mnogosearch.org/doc33/msearch-indexing.html

  • Installing Symfony2, Solarium & NelmioSolariumBundle

    The Nelmio Solarium Bundle connects Solarium to Symfony2. This bundle is an initiative from Nelmio. Solarium is a PHP library for communicating with Apache Solr. This short list of commands is more a personal note than a tutorial, meant to save time in case of future reinstallations. (more…)