Blog

  • blog-project

    Getting Started with Create React App

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    npm start

    Runs the app in development mode.
    Open http://localhost:3000 to view it in the browser.

    The page will reload if you make edits.
    You will also see any lint errors in the console.

    npm test

    Launches the test runner in interactive watch mode.
    See the section about running tests for more information.

    npm run build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include content hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    npm run eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t ever have to use eject. The curated feature set is suitable for small and mid-sized deployments, and you shouldn’t feel obligated to use this feature. However, we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Visit original content creator repository
    https://github.com/marianabalanchuk/blog-project

  • sort-deepsort-yolov3-ROS

    Visit original content creator repository
    https://github.com/germal/sort-deepsort-yolov3-ROS

  • sparse-scene-flow

    Sparse Scene Flow

    This repository contains code for sparse scene flow estimation using stereo cameras, proposed by P. Lenz et al.: Sparse Scene Flow Segmentation for Moving Object Detection in Urban Environments, Intelligent Vehicles Symposium (IV), 2011. This method can be used as a component in your visual object tracking, 3D reconstruction, or SLAM applications as an alternative to dense (and typically expensive-to-compute) scene flow methods.

    Note: this repository contains only the scene flow estimator; implementations of scene flow clustering and object tracking are not provided.


    If you want to know what the difference between scene flow and optical flow is, see this Quora thread.

    Demo Video

    Click here to watch the video.

    Prerequisites

    In order to run the code, your setup has to meet the following minimum requirements (tested versions in parentheses; other versions might work, too):

    • GCC (4.8.4)
    • Eigen (3.x)
    • pybind11

    Install

    Compiling the code using CMake

    1. mkdir build
    2. cd build
    3. cmake ..
    4. make all

    Running the sparse flow app

    1. Download KITTI
    2. See python/python_example.py for an example of how to use the sparse scene flow estimator

    Remarks

    • External libraries

    • For optimal performance, run the sf-estimator in release mode.

    UPDATE (Jan’20): I added bindings for Python and removed most of the “old” example code in order to shrink the dependencies to a minimum. See the python example.

    If you have any issues or questions about the code, please contact me: https://www.vision.rwth-aachen.de/person/13/

    Citing

    If you find this code useful in your research, please cite:

    @inproceedings{Lenz2011IV,
      author = {Philip Lenz and Julius Ziegler and Andreas Geiger and Martin Roser},
      title = {Sparse Scene Flow Segmentation for Moving Object Detection in Urban Environments},
      booktitle = {Intelligent Vehicles Symposium (IV)},
      year = {2011}
    }
    

    License

    GNU General Public License (http://www.gnu.org/licenses/gpl.html)

    Copyright (c) 2017 Aljosa Osep Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    Visit original content creator repository https://github.com/aljosaosep/sparse-scene-flow
  • Kemono-Converter

    M3U8 to MP4 Converter for Kemono

    Description

    This script downloads M3U8 files from Kemono, extracts the M3U8 links for videos with a resolution of 1920×1080, and converts them to MP4.

    Usage

    1. Run the script.
    2. Enter the Kemono post link when prompted.
    3. Choose a name for the TXT file that will be downloaded.
    4. The script will download the TXT file and extract the M3U8 links.
    5. The M3U8 links will be saved to a separate file in the ‘enlaces_m3u8’ folder.
    6. The M3U8 files will be converted to MP4 and moved to the ‘videos’ folder.
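The link-extraction step above can be sketched with a small, self-contained example. The regex, the sample text, and the function name are illustrative assumptions, not the script’s actual code:

```python
import re

def extract_m3u8_links(text: str, resolution: str = "1920x1080"):
    """Pull .m3u8 URLs out of raw text, keeping only those that
    mention the requested resolution."""
    links = re.findall(r"https?://\S+\.m3u8\S*", text)
    return [link for link in links if resolution in link]

sample = """
https://example.com/video_640x360.m3u8
https://example.com/video_1920x1080.m3u8?token=abc
"""
print(extract_m3u8_links(sample))
```

A filter like this is why the script only processes posts whose playlists actually advertise a 1920×1080 stream.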

    Requirements

    • Python 3.x
    • FFmpeg

    Installation Instructions

    1. Clone this repository to your local machine.
    2. Install Python 3 if you don’t already have it.
    3. Install FFmpeg from its official website.

    Notes

    • Make sure you have a stable internet connection to download the required files.
    • The M3U8 files must contain links with 1920×1080 resolution to be processed correctly.

    Author

    This script was created by Ignacio. You can contact me through my GitHub profile.

    Date

    This README was created on March 12, 2024.

    Visit original content creator repository
    https://github.com/Ignaadex/Kemono-Converter

  • Cognitive-Speech-STT-ServiceLibrary

    Visit original content creator repository
    https://github.com/WinstonMoh/Cognitive-Speech-STT-ServiceLibrary

  • Twitter

    Twitter

    composer require socialiteproviders/twitter

    Installation & Basic Usage

    Please see the Base Installation Guide, then follow the provider specific instructions below.

    Add configuration to config/services.php

    'twitter' => [
      'client_id' => env('TWITTER_CLIENT_ID'),
      'client_secret' => env('TWITTER_CLIENT_SECRET'),
      'redirect' => env('TWITTER_REDIRECT_URI'),
    ],

    Enable Sign in With Twitter

    You will need to enable 3-legged OAuth in the Twitter Developers Dashboard. Make sure to also add your callback URL.

    Add provider event listener

    Laravel 11+

    In Laravel 11, the default EventServiceProvider was removed. Instead, register the listener using the listen method on the Event facade, in the boot method of your AppServiceProvider.

    • Note: You do not need to add anything for the built-in socialite providers unless you override them with your own providers.

    Event::listen(function (\SocialiteProviders\Manager\SocialiteWasCalled $event) {
        $event->extendSocialite('twitter', \SocialiteProviders\Twitter\Provider::class, \SocialiteProviders\Twitter\Server::class);
    });

    Laravel 10 or below

    Configure the package’s listener to listen for `SocialiteWasCalled` events.

    Add the event to your listen[] array in app/Providers/EventServiceProvider. See the Base Installation Guide for detailed instructions.

    protected $listen = [
        \SocialiteProviders\Manager\SocialiteWasCalled::class => [
            // ... other providers
            \SocialiteProviders\Twitter\TwitterExtendSocialite::class.'@handle',
        ],
    ];

    Usage

    You should now be able to use the provider like you would regularly use Socialite (assuming you have the facade installed):

    return Socialite::driver('twitter')->redirect();

    Returned User fields

    • id
    • nickname
    • name
    • email
    • avatar

    Visit original content creator repository
    https://github.com/SocialiteProviders/Twitter

  • JBoot

    JBoot


    JBoot is a utility for scheduling and executing system reboots with optional tasks using custom logic. JBoot allows you to schedule reboots for computers, customize actions to perform on reboot, and handle various use cases that involve system restarts.


    Introduction

    JBoot is designed to provide a simple and flexible way to schedule system reboots. It allows you to specify the desired reboot time, and optional tasks to run on reboot. JBoot also offers enhanced control over system restarts.

    Traditional reboot methods are often limited in their capabilities. JBoot aims to address these limitations by providing an intuitive interface to manage and execute reboots while allowing you to define custom behavior based on your requirements.

    Maven

    To use JBoot in your Maven project, add this dependency to the dependencies section of your project’s pom.xml file.

    <dependency>
        <groupId>io.github.gageallencarpenter</groupId>
        <artifactId>JBoot</artifactId>
        <version>1</version>
    </dependency>

    Features

    • Schedule system reboots at specific dates and times.
    • Define custom actions or tasks to execute on reboot.
    • Flexible and customizable reboot management.
    • Enhanced control over system restarts.

    Usage

    1. Clone or download the JBoot repository to your local machine.
    2. Include the JBoot library in your project and import the necessary classes.
    3. Utilize the available methods to schedule and manage system reboots based on your requirements.

    Overview

    Method overview (use case per method):

    • scheduleImmediateReboot(): schedule an immediate system reboot.
    • scheduleRebootAt(Date rebootDate): schedule a system reboot at a specific date and time.
    • scheduleProgrammedReboot(Date rebootDate, String program): schedule a system reboot at a specific date and time, with a program to run on startup.
    • scheduleTask(String program) (private): generate and execute a PowerShell script that schedules a task on reboot.
    • generateScheduleScript(String program) (private): generate a PowerShell script for scheduling a task on reboot.
    • calculateSecondsUntilReboot(Date rebootDate) (private): calculate the number of seconds until a scheduled reboot.
    • executeShutdown(long secondsUntilReboot) (private): execute the system shutdown command to initiate the reboot.
    • handleException(Exception e) (private): handle exceptions that might occur during the reboot scheduling process.
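The timing logic behind scheduleRebootAt and calculateSecondsUntilReboot can be sketched outside the library. This is Python for illustration only; JBoot itself is Java, and the exact shutdown invocation it uses is an assumption here (shown as the standard Windows `shutdown /r /t <seconds>` form):

```python
from datetime import datetime, timedelta

def seconds_until_reboot(reboot_date: datetime, now: datetime) -> int:
    """Whole seconds between now and the scheduled reboot (never negative)."""
    delta = (reboot_date - now).total_seconds()
    return max(0, int(delta))

def shutdown_command(seconds: int) -> str:
    """Build a Windows restart command with the computed delay."""
    return f"shutdown /r /t {seconds}"

now = datetime(2024, 1, 1, 12, 0, 0)
reboot = now + timedelta(hours=1)
print(shutdown_command(seconds_until_reboot(reboot, now)))  # shutdown /r /t 3600
```

Clamping the delta at zero mirrors the practical requirement that a reboot date in the past should trigger an immediate reboot rather than an error.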

    Examples

    Here are a few examples demonstrating how to use JBoot to schedule and manage system reboots.

    Immediate Reboot

    This section demonstrates how to use the scheduleImmediateReboot() method to initiate an immediate system reboot.

    public static void main(String[] args) {
        RebootScheduler scheduler = new RebootScheduler();
        scheduler.scheduleImmediateReboot();
    }

    Scheduled Reboot

    This section provides an example of scheduling a system reboot for a particular time using the RebootScheduler class’s scheduleRebootAt(rebootDate) method.

    public static void main(String[] args) {
        RebootScheduler scheduler = new RebootScheduler();
        Date rebootDate = /* Specify a desired reboot date and time */;
        scheduler.scheduleRebootAt(rebootDate);
    }

    Scheduled Reboot With Restart Customization

    This section illustrates how to use the scheduleProgrammedReboot() method to schedule a system reboot at a specific date and time, along with a custom task to run on reboot.

    public static void main(String[] args) {
        RebootScheduler scheduler = new RebootScheduler();
        Date rebootDate = /* Specify a desired reboot date and time */;
        String customTask = /* Define a custom task to run on reboot (e.g. path/to/App.exe)*/;
        scheduler.scheduleProgrammedReboot(rebootDate, customTask);
    }

    Contributions

    Contributions to JBoot are welcome! If you’d like to contribute, please follow these steps:

    1. Fork the repository and create a new branch for your feature or bug fix.
    2. Make your changes and submit a pull request.
    3. Provide a clear description of your changes and their purpose.

    License

    JBoot is licensed under the MIT License.

    Visit original content creator repository https://github.com/GageAllenCarpenter/JBoot
  • docker-coldfusion10

    Docker CF10

    docker pull fridus/coldfusion10
    

    Features

    Create the docker

    docker run -d -p 8080:80 \
      -v /your/path:/var/www \
      fridus/coldfusion10

    Supported environment variables (defaults in parentheses):

    • COLDFUSION_ADMIN_PASSWORD (default: Adm1n$)
    • COLDFUSION_SERIAL_NUMBER
    • DATASOURCE_ARGS
    • DATASOURCE_DB (default: the value of DATASOURCE_NAME)
    • DATASOURCE_HOST
    • DATASOURCE_NAME
    • DATASOURCE_PASSWORD (default: empty)
    • DATASOURCE_USER (default: root)
    • DATASOURCES: in JSON format; DATASOURCE_HOST is the default host and DATASOURCE_USER the default user
    • ENABLE_HIBERNATE_DEBUG: set to true to keep the Hibernate debug log active
    • JVM_JAVA_ARGS: overwrites java.args (see jvm.config)
    • OUTPUT_LOGS (default: false): set to true to add the Apache and ColdFusion logs to the output
    • REDIS_DATABASE (default: 0)
    • REDIS_HOST
    • REDIS_PORT
    • SCHEDULER_CLUSTER_CREATETABLES (default: false)
    • SCHEDULER_CLUSTER_DSN
    • SMTP_PORT_25_TCP_ADDR: mail server address
    • TIMEZONE (default: Europe/Brussels)

    With custom vhost

    docker run -d -p 8080:80 \
      -v /your/path:/var/www \
      -v /path/vhost/dir:/etc/apache2/sites-enabled \
      fridus/coldfusion10

    Example of custom

    <VirtualHost *:80>
      DocumentRoot /var/www/website/www
      <Directory />
        AllowOverride All
      </Directory>
    </VirtualHost>

    With server smtp

    With a linked smtp container, the mail server is configured automatically. The internal name must be smtp.

    docker run -d -p 8080:80 \
      -v /var/www:/var/www \
      --link mailcatcher:smtp \
      fridus/coldfusion10

    With a datasource configured

    One datasource

    • DATASOURCE_NAME: required
    • DATASOURCE_HOST: required
    • DATASOURCE_USER: root
    • DATASOURCE_PASSWORD: ""
    • DATASOURCE_DB: DATASOURCE_NAME if not defined
    • DATASOURCE_ARGS: optional

    docker run -d -p 8080:80 \
      -v /var/www:/var/www \
      --link mailcatcher:smtp \
      -e DATASOURCE_NAME=mydatasource \
      -e DATASOURCE_HOST=`ip route get 1 | awk '{print $NF;exit}'` \
      fridus/coldfusion10

    Many datasources

    Use DATASOURCES in JSON format. DATASOURCE_HOST is the default host.

    [{
      "database": "...",
      "name": "Data source name",
      "password": "...",
      "username": "..."
    }, {
      "database": "...",
      "name": "...",
      "password": "...",
      "username": "...",
      "host": "..."
    }, {
      "database": "..."
    }]
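The defaulting rules described above can be illustrated with a small sketch. This is a hypothetical helper, not part of the image; the fallback of a missing name to the database name is an assumption for illustration:

```python
import json

def resolve_datasources(datasources_json, default_host, default_user="root"):
    """Fill in the documented defaults: a missing host falls back to
    DATASOURCE_HOST, a missing username to DATASOURCE_USER, and a missing
    password to the empty string."""
    resolved = []
    for entry in json.loads(datasources_json):
        resolved.append({
            "database": entry["database"],                    # required
            "name": entry.get("name", entry["database"]),     # assumed fallback
            "host": entry.get("host", default_host),
            "username": entry.get("username", default_user),
            "password": entry.get("password", ""),
        })
    return resolved

raw = '[{"database": "app"}, {"database": "logs", "host": "10.0.0.5"}]'
print(resolve_datasources(raw, default_host="172.17.0.1"))
```

Per-entry fields override the container-wide DATASOURCE_* defaults, so only the non-standard datasources need explicit host or credentials.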

    docker run -d -p 8080:80 \
      -v /var/www:/var/www \
      --link mailcatcher:smtp \
      -e DATASOURCES=`cat ./datasources.json` \
      -e DATASOURCE_HOST=`ip route get 1 | awk '{print $NF;exit}'` \
      fridus/coldfusion10

    Set serial number

    To activate your license, use the COLDFUSION_SERIAL_NUMBER environment variable.

    docker run -d -e COLDFUSION_SERIAL_NUMBER="1234-1234-1234-1234-1234-1234" \
      fridus/coldfusion10

    Set Admin password

    docker run -d -e COLDFUSION_ADMIN_PASSWORD="myPassword" fridus/coldfusion10

    Redis session

    With a link redis or environment variables

    Link

    • REDIS_DATABASE (default 0)

    Env

    • REDIS_HOST
    • REDIS_PORT
    • REDIS_DATABASE (default 0)

    Scheduler cluster

    Env

    • SCHEDULER_CLUSTER_DSN
    • SCHEDULER_CLUSTER_CREATETABLES

    docker run -d -e SCHEDULER_CLUSTER_DSN="tasks" fridus/coldfusion10

    Access

    • /CFIDE/administrator/index.cfm
    • The admin password for the ColdFusion server defaults to Adm1n$ (see COLDFUSION_ADMIN_PASSWORD)

    About

    Project based on finalcut/coldfusion10

    Visit original content creator repository
    https://github.com/Fridus/docker-coldfusion10

  • kademlia-c

    kademlia-c

    Kademlia is an ingenious distributed hash-table (DHT) protocol that makes use of the XOR metric to measure “distance” between peers. Keys and values are given to the K-closest peers to the hash (256-bits in our case) of the given key. Because of the XOR metric, we can find the K-closest peers in time proportional to the log of the size of the network. That means in a network with a million peers, it might only take about 20 steps to find those responsible for a given key.
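The XOR metric and the K-closest lookup can be illustrated with a short, self-contained sketch (Python here for brevity; the repository itself is C, and the function names below are illustrative, not the library’s API):

```python
import math
import random

def xor_distance(a: int, b: int) -> int:
    """XOR of two node/key IDs, interpreted as an integer distance."""
    return a ^ b

def k_closest(peers, key, k=20):
    """Return the k peer IDs closest to `key` under the XOR metric."""
    return sorted(peers, key=lambda p: xor_distance(p, key))[:k]

random.seed(0)
peers = [random.getrandbits(256) for _ in range(1000)]  # toy network
key = random.getrandbits(256)
closest = k_closest(peers, key, k=3)

# Each routing step roughly halves the remaining XOR distance, so lookups
# scale with log2 of the network size:
print(math.ceil(math.log2(1_000_000)))  # about 20 steps for a million peers
```

The toy version sorts the whole peer list; the real protocol achieves the same result by querying only the contacts in progressively closer routing-table buckets.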

    This is a work in-progress. I am writing it in C because I like the challenge.

    External Dependencies

    The two external dependencies that cannot be resolved via the git submodule command below are libuv and OpenSSL. The first is a cross-platform library for asynchronous I/O operations; the second provides the SHA-256 hash function.

    jonab@MacBookPro kademlia-c % brew install libuv
    jonab@MacBookPro kademlia-c % brew install openssl

    jonab@Ubuntu kademlia-c % sudo apt install libuv1-dev
    jonab@Ubuntu kademlia-c % sudo apt install libssl-dev

    Building

    We use standard CMake to build the library and testing suite. For example, these are the commands I run to test locally.

    jonab@MacBookPro kademlia-c % cmake -S . -B ../kademlia-build
    jonab@MacBookPro kademlia-c % make -C ../kademlia-build
    jonab@MacBookPro kademlia-c % ../kademlia-build/kademlia-tests

    You might need to run this command beforehand to pull our submodules.

    jonab@MacBookPro kademlia-c % git submodule update --init --recursive --depth=1

    Milestones

    • Uint256, Contact, OrderedDict, and Bucket.
    • Routing table (without protocol for replacing old peer).
    • Contact heap.
    • Crawler and contact crawler.
    • Protocol.
    • Client.
    • Client refreshing.

    Stretch goals

    • More efficient OrderedDict.

    Visit original content creator repository
    https://github.com/jontab/kademlia-c

  • Transfer-Learning-Edge-Devices

    Transfer Learning for Edge Devices

    Create your own features using custom data and pre-trained models.
    Using a Google Colab notebook, we take an existing model and re-train it to detect our own custom object.

    Introduction

    Creating deep learning models from scratch can be time-consuming and is often not the best use of your time. We are fortunate to have many pre-trained models available that we can use as a starting point for our own applications. With transfer learning, we re-train only the small portion we need in order to develop a feature or solve a problem.

    For this project we will be re-training a base Model SSD-mobilenet-v1 pre-trained on the PASCAL VOC dataset for object detection. We will download data from Google Open Images to retrain the model to detect a custom object class (vehicle license plate). We will then convert and deploy the model onto an Nvidia Jetson Nano.

    Download dataset for training

    Our base model can detect 20 objects out of the box but we want to re-train it to detect license plates. We will be using Google Open Images to download our dataset. We have a choice of 600 different classes but we will be using ‘Vehicle registration plate’ for our model. Using the website’s provided script, we end up downloading 6800 images conveniently split up into test/train/validation. This process can be done with any number of classes but we will be using one class for our example.

    Dataset Name: Google Open Images dataset v6 (classes)
    Dataset Link: LINK
    Dataset Size: 3.18 GB
    Dimensions: Width [480, 1024], Height [575, 1024]
    # of images: 6859 (.JPEG)
    # of classes: 1 (‘Vehicle registration plate’)
    # of images per class: 6859 (Train 5365, Test 1109, Validation 385)
    Image file size: [0.035, 1.9] MB

    The images themselves vary quite a bit in terms of viewing depth/angles and license plate types. We will be trying to detect USA license plates but our dataset includes plates from all over the world.

    Another big limitation of this dataset is its high variance in viewing angles, distances, cars and plate types. Unfortunately, many of the highly curated datasets related to this particular object class are proprietary and hard to find.

    Re-training existing model

    For this project we first re-trained on the Jetson Nano 2GB using the provided Jetson utilities script for 100 epochs.

    The 2GB Nano is not very powerful for training and as a result takes almost 5 days to train. A faster way to do this is by using Google Colab or your own higher-powered GPU. We trained 100 epochs in 6.7 hours using Google Colab on an Nvidia T4. The average time per epoch on the Nano was 86 minutes, while Google Colab took 4 minutes per epoch.

    At the end of our training we achieved a loss of 2.78.

    Model Evaluation

    The next step is to evaluate our model against some test data. Fortunately, during the initial download of our Open Images dataset we created a test folder we can use for evaluation. The test folder has 1109 images. Running our evaluation script, we achieve a mAP of 0.569. Recognizing more than half of the test instances is acceptable performance, but again the test-set pictures are quite variable. Some pre-trained license plate detection models can reach a mAP of over 90%. The next step is to convert this model and deploy it to our Jetson Nano.
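mAP is built on per-detection IoU (intersection over union) between predicted and ground-truth boxes. A minimal IoU sketch, with boxes as (x1, y1, x2, y2) tuples (this is the standard formula, not the evaluation script’s code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two overlapping license-plate boxes share half their width:
print(iou((0, 0, 100, 40), (50, 0, 150, 40)))  # 1/3
```

A detection typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (0.5 is the PASCAL VOC convention), and mAP averages precision over those matches.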

    Export and Deploy to Edge Device

    After re-training our model we need to convert it to an .onnx file to load onto the Jetson Nano.
    ONNX is an open model format that supports many of the popular ML frameworks, including PyTorch, TensorFlow, TensorRT, and others, so it simplifies transferring models between tools.
    We can use the onnx_export.py script provided by the Jetson utilities. Then, with TensorRT (pre-installed on the Nano), we can optimize the model for deployment on the Jetson Nano.

    Running in real-time on the Jetson Nano we get a consistent 30fps.

    In these clips, we played a YouTube video and pointed a web camera at the screen for real-time detection. Overall the device captured most vehicles that were on screen, the caveat being that the viewing angles were very tight. Viewing angles beyond 30° don’t seem to get detected.

    Potential Applications

    We are only detecting one object in this example to save training time and to deploy on a low-end edge device. Even with a Jetson Nano (an entry-level compute product) we can get good real-time detection. We could potentially expand this to detect several objects of our choosing, or tackle tasks like pose estimation or facial recognition. Perhaps we could make a smart doorbell or a room monitor.

    Some other potential applications include medical imaging and sensing, smarter NLP and chatbots, computer vision problems, forecasting specific events.

    Pros and Cons of Transfer Learning Method

    The Pros of Transfer Learning

    • Increased performance vs. starting from scratch
    • Saves time
    • Doesn’t necessarily need massive amounts of data

    The Cons of Transfer Learning

    • Need a model that’s somewhat related to your task; sometimes one doesn’t exist
    • Risk of negative transfer, where the model ends up performing worse

    Conclusion

    The new paradigm of transfer learning allows smaller organizations and individuals to create features for their specific applications. This is more akin to learning from previous generations and building upon that collective knowledge. Using pre-trained models saves us the time and energy to develop our own unique features and uses.

    Transfer learning is popular because of its wide range of applications but not all problems are suited for this type of method. Identifying the problem beforehand will greatly aid you in figuring out the best method to deploy. Overall, transfer learning techniques are here to stay and will continue to be an important part of AI and Machine Learning applications in the future.

    Visit original content creator repository https://github.com/particleman14/Transfer-Learning-Edge-Devices