Category: Blog

  • Podcast-Feed-Extractor

    Podcast-Feed-Extractor

    Getting all your RSS feed addresses from your iTunes directory

This is a follow-up to my POD-igy repo.

    I realized that there was no easy way to extract the RSS feeds from your podcast app. I use iTunes on my desktop for downloading my podcasts. So I created this repo that extracts the podcasts from your iTunes folder and provides you with the RSS feeds.

    Instructions:

    1. Download this repo to your disk.
2. Open your command prompt and navigate to where you downloaded this repo.
    3. Enter the repo:
      cd Podcast-Feed-Extractor
    4. Install the requirements:
      pip install -r requirements.txt
5. Find the path to your iTunes folder. By default, on Windows, it is installed at "C:\Users\<user>\Music\iTunes"
    6. Run this code:
      python3 podcast.py <path of the iTunes folder as identified in step 5>
    7. The output files are as follows:
1. A pickle file (podcast_feed.pkl): the dictionary { podcast_name : rss_feed }, in case you want to load it for further manipulation in Python (see the sketch after this list)
2. A text file (error_podcasts): the names of podcasts whose RSS feeds could not be identified
3. A text file (rss_feeds): the name of each podcast and its RSS URL
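
If you want to work with the feeds in Python, the pickle file can be loaded back into the dictionary. A minimal sketch, assuming the default output file podcast_feed.pkl is in the current directory:

  import pickle

  # Load the { podcast_name : rss_feed } dictionary written by podcast.py
  with open("podcast_feed.pkl", "rb") as f:
      feeds = pickle.load(f)

  for name, rss_url in feeds.items():
      print(f"{name}: {rss_url}")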

    Sample output files are available in the sample_outputs folder

    Screenshots from command window

    1. List of podcasts in the iTunes library


    2. List of podcasts whose RSS feeds could not be extracted


    3. List of podcasts with the url of the RSS feeds


    Visit original content creator repository https://github.com/SwamiKannan/Podcast-Feed-Extractor
  • DigitalBrewery

    🍺 Digital Brewery

    Unreal Engine Azure Blender

Table of Contents

1. Project Description
2. Technologies Used
3. Layered Architecture Diagram
4. Configuration
5. Important Notes

Project Description

Digital Brewery is a digital twin built in Unreal Engine 5 that simulates and visualizes the complete brewing process of the USACA brewery. It uses artificial intelligence and Azure technologies to create an immersive, real-time experience. The project integrates 3D models created in Blender, advanced simulation with Azure Digital Twins, data processing with Azure PostgreSQL, and interaction capabilities based on Azure OpenAI.

🚀 Technologies Used

• Unreal Engine 5: graphics engine for building 3D visual experiences and simulations.
• Azure Digital Twins: platform for modeling and simulating physical assets, processes, and environments.
• Azure OpenAI: AI services for natural interaction with the digital twin.
• Azure PostgreSQL: scalable, secure relational database.
• Blender: tool for 3D modeling, texturing, and animation.

Layered Architecture Diagram

System Requirements

• Hardware:

  • CPU: 8-core processor or better
  • GPU: DirectX 12-compatible graphics card
  • RAM: 16 GB or more
  • Storage: 10 GB of free space
• Software:

  • Unreal Engine 5.x
  • Blender 3.x
  • Azure CLI (for configuring the Azure services; see the sketch after this list)
  • Visual Studio 2022 with the Unreal Engine development tools
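
The Azure side can be provisioned from the command line. A minimal sketch with the Azure CLI, assuming hypothetical resource group and instance names (adjust them to your subscription):

  # Hypothetical names -- replace with your own resource group, region and instance names.
  az login
  az extension add --name azure-iot          # provides the "az dt" (Digital Twins) commands
  az dt create --dt-name digital-brewery-twin --resource-group brewery-rg --location eastus
  az postgres flexible-server create --name brewery-db --resource-group brewery-rg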

Cloning the Repository

    git clone https://github.com/MillerMosquera/DigitalBrewery.git
    cd DigitalBrewery

Important Notes

Important

Make sure data is optimized and synchronized in real time between the physical systems and the digital twin.

Note

Keep the documentation for every configuration and change up to date.

    Visit original content creator repository https://github.com/MillerMosquera/DigitalBrewery
  • OdooRPC

    C++ OdooRPC

    author: Gallay David

    Goals

    • lightweight
    • minimal
    • few dependencies (libcurl and std, that’s all)
    • but still simple

Future developments

Only fixes will be made here.
For a higher-level library, see OdooCpp, which uses this library along with others such as nlohmann/json.

    Use

    1. Create a client

      const std::string URL = "my-url.com";
      const std::string DATABASE = "my-database";
      
      // Create Credentials
      Credentials creds("login", "password");
      
      OdooRPC client (
          URL,
          DATABASE,
          creds
      );

This could also be written all in one:

      OdooRPC client (
          "my-url.com",
          "my-database",
          {
          	"login",
          	"password"
          }
      );

This will automatically authenticate the user and retrieve its uid, which is needed for most of the queries.
The uid is tied to the database and is stored in the RPC client.

    2. use raw_query method

This will return the response body as a string.
It can be parsed using any JSON library, for example nlohmann/json, which is header-only.

Knowing that the following Python function exists on the Odoo side:

      def search_read(self, domain=None, fields=None, offset=0, limit=None, order=None):
      	...

We can call it like this:

      std::cout << client.raw_query(
          "res.partner",
          "search_read",
          {
              "[]",                        // domain
              R"(["name", "user_id"])",    // fields
              0,                           // offset
              5                            // limit
          }
      ) << std::endl;

This asks for the fields "name" and "user_id" of the first 5 partners, starting at offset 0, with the default order.

• Arguments are passed in the same order as the parameters of the Python function.
• Note the use of C++11 raw string literals; params must be given in JSON format.

We can also build the JSON-RPC body manually if really needed.
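
To close the loop, here is a minimal sketch of parsing the returned string with nlohmann/json; the body below is a stand-in, and the "result" layout is assumed to follow the usual JSON-RPC response shape:

  #include <iostream>
  #include <string>
  #include <nlohmann/json.hpp>   // header-only JSON library

  int main() {
      // Stand-in for the string returned by client.raw_query(...).
      std::string body = R"({"jsonrpc": "2.0", "id": 1,
          "result": [{"name": "Azure Interior", "user_id": false}]})";

      nlohmann::json response = nlohmann::json::parse(body);
      for (const auto &partner : response["result"]) {
          std::cout << partner["name"] << std::endl;
      }
      return 0;
  }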

    Visit original content creator repository
    https://github.com/divad1196/OdooRPC

  • hazmat-math

    hazmat-math

    Hazmat ECC arithmetic for Cryptography.io

    Hazmat math implements some basic ECC arithmetic for use with Cryptography.io objects using the OpenSSL backend.
    Specifically, _EllipticCurvePrivateKey and _EllipticCurvePublicKey.

    Any operations with EC_POINT will return an _EllipticCurvePublicKey and any operations with BN will return an _EllipticCurvePrivateKey.

    Usage:

from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.asymmetric import ec
    
    from hazmat_math import operations as ops
    
    
priv_a = ec.generate_private_key(ec.SECP256K1(), default_backend())
priv_b = ec.generate_private_key(ec.SECP256K1(), default_backend())
    
    pub_a = priv_a.public_key()
    pub_b = priv_b.public_key()
    
    # Multiplication
    priv_c = ops.BN_MOD_MUL(priv_a, priv_b)
    pub_c = ops.EC_POINT_MUL(pub_a, priv_a)
    
    # Division
    priv_c = ops.BN_DIV(priv_a, priv_b)
    
    # Inversion
    inv_a_priv = ops.BN_MOD_INVERSE(priv_a)
    inv_a_pub = ops.EC_POINT_INVERT(pub_a)
    
    # Addition
    priv_c = ops.BN_MOD_ADD(priv_a, priv_b)
    pub_c = ops.EC_POINT_ADD(pub_a, pub_b)
    
    # Subtraction
    priv_c = ops.BN_MOD_SUB(priv_a, priv_b)
    pub_c = ops.EC_POINT_SUB(pub_a, pub_b)
    
    # Get generator point from curve
    gen_point = ops.CURVE_GET_GENERATOR(ec.SECP256K1())
    
    # Get order of curve
    order = ops.CURVE_GET_ORDER(ec.SECP256K1())
    

    Installation:

    1. Clone or download the repository
2. Ensure that you have cryptography.io installed (pip install cryptography)
    3. python setup.py install

    TODO:

    1. Testing!
2. Get set up on PyPI.

    Visit original content creator repository
    https://github.com/tuxxy/hazmat-math

  • TimeTracker

    Time Tracker

    This project allows tracking the amount of time spent on projects.

    Categories and subcategories can be used to organize projects. Entries can then be added to these projects. The total time spent on all projects can then be tracked, along with the amount of time spent per project, category and subcategory.

    I am currently seeking employment as a developer.
    This project shows my ability to understand, work with, and implement the following:

• C#
• WinForms
• SQL Server
• Dapper
• Separating core logic from the UI into a library
• Using NuGet packages
• Dependency Injection

    Please feel free to contact me if interested in talking about this project, C#, or employment opportunities. Thanks for reading!

    Kyle Givler – https://www.linkedin.com/in/kyle-givler/

    Time Tracker Notes:

    This program will automatically (re-)create the SQLite database if it does not exist.

Unfortunately, automatically creating the database turned out to be harder than I expected for SQL Server using Dapper, so the SQL Server database is not created automatically. If you use the SQL Server option, I recommend publishing the database from the SQL Server database project in the solution. Another option is to run the provided script (TimeTrackerDB.sql) after checking whether any modifications are needed for your setup. For SQL Server, the connection string will need to be modified in appsettings.json.

    The default database is SQLite. Valid settings for “DatabaseType” in appsettings.json are SQLite and MSSQL.
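
For reference, a minimal sketch of the relevant part of appsettings.json; only "DatabaseType" and its two values come from the notes above, while the connection-string section and key names are assumptions and the actual file may differ:

  {
    "DatabaseType": "MSSQL",
    "ConnectionStrings": {
      "Default": "Server=localhost;Database=TimeTrackerDB;Trusted_Connection=True;"
    }
  }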

I would love to hear any suggestions on how to automatically create the database, tables and stored procedures if they do not already exist for SQL Server using Dapper.

    Visit original content creator repository
    https://github.com/JoyfulReaper/TimeTracker

  • nyoom.nvim

    Nyoom.nvim


    These are your father’s parentheses.
    Elegant weapons for a more… civilized age.
    xkcd/297


    Nyoom.nvim was an answer to abstracted and complex codebases that take away end-user extensibility, try to be a one-size-fits-all config, and needlessly lazy load everything. It solves this problem by providing a set of well integrated modules similar to doom-emacs. Modules contain curated plugins and configurations that work together to provide a unified look and feel across all of Nyoom. The end goal of nyoom.nvim is to be used as a framework config for users to extend and add upon, leading to a more unique editing experience.

    Nyoom can be anything you’d like. Enable all the modules for the vscode-alternative in you, remove some and turn it into the prose editor of your dreams, or disable everything and have a nice set of macros to start your configuration from scratch!

    At its core, Nyoom consists of a set of intuitive macros, a nice standard library, a set of modules, and some opinionated default options, and nothing more.

Designed around the mantras of doom-emacs:

    • Gotta go fast. Startup and run-time performance are priorities.
    • Close to metal. There’s less between you and vanilla neovim by design. That’s less to grok and less to work around when you tinker.
    • Opinionated, but not stubborn. Nyoom (and Doom) are about reasonable defaults and curated opinions, but use as little or as much of it as you like.
    • Your system, your rules. You know better. At least, Nyoom hopes so! There are no external dependencies (apart from rust), and never will be.
    • Nix/Guix is a great idea! The Neovim ecosystem is temperamental. Things break and they break often. Disaster recovery should be a priority! Nyoom’s package management should be declarative and your private config reproducible, and comes with a means to roll back releases and updates (still a WIP).

    It also aligns with many of Doom’s features:

    • Minimalistic good looks inspired by modern editors.
    • A modular organizational structure for separating concerns in your config.
    • A standard library designed to simplify your fennel bike shedding.
    • A declarative package management and module system (inspired by use-package, powered by Packer.nvim). Install packages from anywhere, and pin them to any commit.
    • A Space(vim)-esque keybinding scheme, centered around leader and localleader prefix keys (SPC and SPCm).
    • Project search (and replace) utilities, powered by ripgrep, and telescope.
• Per-file indentation style detection and editorconfig integration. Let someone else argue about tabs vs spaces.
    • Support for modern tooling and navigation through language-servers, null-ls, and tree-sitter.

    For more info, checkout our (under construction) FAQ

    Prerequisites

    • Neovim v0.8.1+
    • Git
    • Ripgrep 11.0+

    Nyoom works best with a modern terminal with Truecolor support. Optionally, you can install Neovide if you’d like a gui.

Nyoom is composed of optional modules, some of which may have additional dependencies. Run :checkhealth to check for anything you may have missed.

    Install

    git clone --depth 1 https://github.com/nyoom-engineering/nyoom.nvim.git ~/.config/nvim 
    cd ~/.config/nvim/
    bin/nyoom install 
    bin/nyoom sync

    Then read getting started to be walked through installing, configuring and maintaining Nyoom Nvim.

    It’s a good idea to add ~/.config/nvim/bin to your PATH! Other bin/nyoom commands you should know about:

    • nyoom sync to synchronize your private config with Nyoom by installing missing packages, removing orphaned packages, and regenerating caches. Run this whenever you modify your packages.fnl and modules.fnl
    • nyoom upgrade to update Nyoom to the latest release
• nyoom lock to dump a snapshot of your currently installed packages to a lockfile.

    Getting help

    Neovim is no journey of a mere thousand miles. You will run into problems and mysterious errors. When you do, here are some places you can look for help:

    • Our Documentation covers many use cases.
    • The builtin :help is your best friend
    • To search available keybinds: <SPC>fk
• Run :checkhealth to detect common issues with your development environment.
    • Search Nyoom’s issue tracker in case your issue was already reported.
• Hop on our Discord server; it's active and friendly!

    If you have an issue with a plugin in Nyoom.nvim, first you should report it here. Please don’t bother package maintainers with issues that are caused by my configs, and vice versa.

    Roadmap

    (under construction)

    Contribute

    PRs Welcome

    Checkout the Contributor Guide

    • I love pull requests and bug reports!
    • Don’t hesitate to tell me my lisp-fu sucks, but please tell me why.
    • Don’t see support for your language, or think it should be improved? Feel free to open an issue or PR with your changes.

    Credits

    • David Guevara For getting me into fennel, and for some of his beautiful macros. Without him Nyoom wouldn’t exist!
    • Oliver Caldwell For his excellent work on Aniseed, Conjure, and making fennel feel like a first class language in neovim
    Visit original content creator repository https://github.com/nyoom-engineering/nyoom.nvim
  • dataJobs_project

    dataJobs_project

The goal is to build a model that predicts the most in-demand skills in the job market. This can help professionals, students, and the industry in general make informed decisions about which skills to develop and improve.

The main objective of the project is to analyze job postings in the data field in order to identify patterns and trends that reveal the most in-demand skills in the current job market. This research can help people who are interested in this field, or who want to enter it, decide whether the predicted skills fit their career plans. Everything is based on the Kaggle dataset Data Jobs Listings – Glassdoor.

Repository files

• EDA_project.ipynb: the full exploratory analysis of the tables used, which are "glassdoor.csv", "glassdoor_benefits_highlights.csv" and "glassdoor_salary_salaries.csv" (download these CSV files to run the code).

• ETL_project.pdf: the document describing and explaining each phase in detail.

• dataJobs_script.py: the main script, which connects to PostgreSQL, loads the data into the database, replaces null values in some columns and empty fields, and normalizes the job titles.

• df_tocsv_and_transfomations.py: uses pandas to load the CSV correctly; most of the data transformation for normalizing jobTitle happens here.

• dimesions_script.py: inserts data into the dimension tables that will be used later in the project.

• project_dashboard.pdf: the dashboard I built in Power BI, with three charts for now.

How to run the scripts

1. Clone the repository from https://github.com/VinkeArtunduaga/dataJobs_project.git
2. Install Python 3.11
3. Install the PostgreSQL database
4. For the libraries, run pip install psycopg2 and pip install pandas; json and csv are also used but ship with the standard library.
5. Create a user and password for PostgreSQL
6. Create a database in pgAdmin 4 named ETL (that is what I named mine, but it can be changed)
7. Change the database connection settings to match your user, password and database (see the sketch after this list)
8. First run df_tocsv_and_transformations.py from the terminal with python df_tocsv_and_transformations.py
9. Then run python dataJobs_script.py to create the main table with its normalizations and null cleanup.
10. Finally, create the dimensions with python dimensions_script.py
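
A minimal sketch of the kind of psycopg2 connection settings step 7 refers to; the variable names and values below are illustrative, not the ones used in dataJobs_script.py:

  import psycopg2

  # Illustrative values -- replace with the user, password and database
  # created in steps 5 and 6 (the database was named ETL in this project).
  conn = psycopg2.connect(
      host="localhost",
      dbname="ETL",
      user="your_user",
      password="your_password",
  )
  cur = conn.cursor()
  cur.execute("SELECT version();")
  print(cur.fetchone())
  conn.close()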

If you want to reproduce the EDA process:

1. Install Jupyter (JupyterLab makes this easier)
2. If pandas could not be installed earlier, run pip install pandas in the terminal (json is part of the standard library and needs no installation)
3. Change the paths to where the CSV files glassdoor.csv, glassdoor_benefits_highlights.csv and glassdoor_salary_salaries.csv are located.
4. Run the cells one by one, or all at once, to see the analysis.

For the second part of the project I created a folder named API; everything done for that part is there.

    Visit original content creator repository
    https://github.com/VinkeArtunduaga/dataJobs_project

  • ngx-kjua


If you find my work useful, you can buy me a coffee; I am very thankful for your support.

    Buy Me A Coffee

    ngx-kjua

    Angular QR-Code generator component.

    This is basically an Angular-wrapper for kjua by Lars Jung.

    Breaking changes v16.1.0

From v16.1.0 on, this library ships a standalone component instead of a module.
See how to implement it

    Demo

    Demo

    StackBlitz

StackBlitz Example for encoding Contacts, Calendar entries, WiFi-settings and more. You can use iPhone's default Camera App; it will decode QR-Codes!

    StackBlitz Example with 300 codes at once (async rendering)

    StackBlitz Example for generating a PDF with jspdf

    Installation

    To install this package, run:

    npm i ngx-kjua --save

    Then import it into your Angular AppModule:

    // Common imports
    import { NgModule /* , ... */ } from '@angular/core';
    
    // Import the package's module
    import { NgxKjuaComponent } from 'ngx-kjua';
    
    @NgModule({
        declarations: [ /* AppComponent ... */ ],
        imports: [
        
            // BrowserModule, 
            // ...
            
            NgxKjuaComponent,
            
            // other imports...
        ],
        // ...
    })
    export class AppModule { }
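
Since the component is standalone, it can also be imported directly into a standalone host component instead of an NgModule. A minimal sketch (the host component below is hypothetical):

  import { Component } from '@angular/core';
  import { NgxKjuaComponent } from 'ngx-kjua';

  @Component({
    selector: 'app-qr-demo',
    standalone: true,
    imports: [NgxKjuaComponent],
    template: `<ngx-kjua [text]="'hello'"></ngx-kjua>`,
  })
  export class QrDemoComponent {}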

    Usage

    Once the package is imported, you can use it in your Angular application:

    Basic

      <ngx-kjua
        [text]="'hello'"
      ></ngx-kjua>

    Advanced

      <ngx-kjua
        [text]="'hello'"
        [renderAsync]="false"
        [render]="'svg'"
        [crisp]="true"
        [minVersion]="1"
        [ecLevel]="'H'"
        [size]="400"
        [ratio]="undefined"
        [fill]="'#333'"
        [back]="'#fff'"
        [rounded]="100"
        [quiet]="1"
        [mode]="'plain'"
        [mSize]="30"
        [mPosX]="50"
        [mPosY]="50"
        [label]="'label text'"
        [fontname]="'sans-serif'"
        [fontcolor]="'#ff9818'"
        [image]="undefined"
        [cssClass]="'image-auto'"
      ></ngx-kjua>

    Options

Caution: When adding images, labels or similar, this will reduce the readability of the QR-code. Consider using a higher error correction level (e.g. H) in those cases.

    Crisp

As you can set the size of the image, the number of 'modules' (black/white boxes that make up the QR-code) is calculated based on the size and the number of quiet modules. The calculation can result in a non-integer value, so that a module is e.g. 4.5 pixels big. The resulting image will be drawn fuzzy if crisp is set to false. Setting it to true results in 'sharp' lines.

    crisp false

    crisp true

    Label

Kjua lets you embed text or an image in the code. This is controlled with the mode setting. Note that this can reduce the readability of the code!

    Image

    Image as Code

    Clear + Image

    This mode let’s you “cut out” parts of the QR-code and at the same time add an image.

    labelimage, imagelabel and clearimage

    Use this, if you want a label AND an image. In these modes mSize, mPosX and mPosY can be provided as an array. In mode labelimage, the first value (index 0) of the mSize, mPosX and mPosY arrays is used for the label, the second value (index 1) is used for image and vice versa. Also in labelimage mode, the label is drawn before the image is drawn and therefore kinda “in the background” if the two overlap.

    All options

• text encoded content (defaults to '')
    • render render-mode: ‘image’, ‘canvas’, ‘svg’ (defaults to image)
    • crisp render pixel-perfect lines (defaults to true)
    • minVersion minimum version: 1..40 (defaults to 1)
    • ecLevel error correction level: ‘L’, ‘M’, ‘Q’ or ‘H’ (defaults to L)
• size size in pixels (defaults to 200; minimum 24 or higher, depending on how many characters you are encoding)
    • fill code color (defaults to #333)
    • back background color (defaults to #fff, for transparent use '' or null)
• rounded rounded corners in percent: 0..100 (defaults to 0; not applied if render is set to svg)
    • quiet quiet zone in modules (defaults to 0)
    • mode modes: ‘plain’, ‘label’, ‘image’ or ‘clear’ (defaults to plain, set label or image property if you change this)
• mSize label/image size in percent: 0..100 (defaults to 30)
• mPosX label/image pos x in percent: 0..100 (defaults to 50)
• mPosY label/image pos y in percent: 0..100 (defaults to 50)
• label additional label text (defaults to '')
    • fontname font for additional label text (defaults to sans-serif)
    • fontcolor font-color for additional label text (defaults to #333)
    • fontoutline draw an outline on the label text in the color of the back (defaults to true)
    • image additional image (defaults to undefined, use an HTMLImageElement or base64-string)
    • imageAsCode draw the image as part of the code (defaults to false)
• renderAsync whether or not rendering is done inside a "requestAnimationFrame" call (defaults to false; use true if you want to generate more than one code, e.g. in a batch)
    • cssClass additional css-class that will be appended to the div-container that contains the qr-code (defaults to undefined)

    More details can be found on larsjung.de/kjua

    Async rendering

    If you plan to render more than one barcode (e.g. batch-generation) I recommend using renderAsync-flag. It executes the rendering inside a “requestAnimationFrame”-call.

    Encoding Contacts, Calendar entries, WiFi-settings, …

The component comes with a helper class (QrCodeHelper) that helps you generate codes with information such as a contact encoded. Currently it supports the generation of:

    • SMS: number with optional pre-defined text
    • Call
    • Geo-Information: a point on the map with Latitude and Longitude
    • Events
    • Email: recipient with an optional subject and text
    • WiFi: SSID with optional password and a flag for hidden WiFis
    • Contact Information: name with optional address, telephone-number(s), email, url.

    Contact Encoding is done with MECard-format and NOT VCard! VCard gives a longer string and therefore a bigger code which potentially has a negative impact on readability for scanners. You can, of course, create a VCard string as well but the format is more complex.

    Generate PDF

    See the example above. It works with pure kjua and has in fact nothing to do with ngx-kjua but I thought somebody might find it useful.

    Visit original content creator repository https://github.com/werthdavid/ngx-kjua
  • Cplusplus

    Cplusplus

This repository contains a number of programs I made in C++ to practice data structures and algorithms. Most of the questions, at the moment, are from Coding Blocks' Launchpad course. Each .cpp file contains the problem statement in a comment at the start of the program.

    Arrays

    This folder contains programs using:-

    • 1-D Arrays
    • 2-D Arrays
    • Strings
      [BOTH STATIC AND DYNAMIC]

    Bitmasking

    This folder contains programs that:-

    • solve problems more efficiently by manipulating the bits of the provided data.

    Fundamentals

    This folder contains programs about:-

    • Basic concepts such as:
      • Datatypes
      • Type conversions
      • Pointers
    • Loops
      • Patterns
    • Different types of operators, including:
      • Arithmetic
      • Logical
      • Bitwise
      • Conditional
      • Miscellaneous
        • Dereference (*)
        • Address of (&)

    Linked Lists

    This folder contains problems regarding linked lists such as:-

    • Sorting
    • Searching
    • Insertion
    • Deletion

Both doubly and singly linked lists are used. Linked lists have been implemented through self-referential structures/classes.

    Searching and Sorting

    This folder contains programs using:-

    • Linear Search
    • Binary Search
    • Bubble Sort
    • Selection Sort
    • Insertion Sort
    • Wave Sort
    • Counting Sort
    • Inbuilt sort function

    Number Theory

This folder consists of programs built on the basics of number theory.

    Recursion

    In this folder, a variety of different problems and algorithms have been implemented using recursion.

    The concept of backtracking is also used in some problems.

    Searching and Sorting

    This folder consists of various algorithms used for searching and sorting

    • Linear Search
    • Binary Search
    • Bubble Sort
    • Counting Sort
    • Insertion Sort
    • Wave Sort

Note: Programs for merge sort and quick sort are in the Recursion folder since they have been implemented recursively, not iteratively.

    Stacks and Queues

This folder consists of programs with practical uses of stacks and queues, and the implementation of stacks and queues using:-

    • Static arrays
    • Dynamic arrays
    • Linked lists
    • Vectors

    Visit original content creator repository
    https://github.com/ishita-gambhir/Cplusplus

  • arcus-java-client

    arcus-java-client: Arcus Java Client CI License

    This is a fork of spymemcached with the following modifications to support Arcus memcached cloud.

    • Collection data types
      • List: A doubly-linked list.
      • Set: An unordered set of unique data.
      • Map: An unordered set of <field, value>.
      • B+Tree: A B+Tree structure similar to sorted map.
    • ZooKeeper based clustering

    JDK Requirements

Compatible JDK versions:

• Runtime requirements: at least 1.6
• Build requirements: at least 1.8

    Getting Started

    The Maven artifact for arcus java client is in the central Maven repository. To use it, add the following dependency to your pom.xml.

    <dependencies>
        <dependency>
            <groupId>com.navercorp.arcus</groupId>
            <artifactId>arcus-java-client</artifactId>
            <version>1.14.1</version>
        </dependency>
    </dependencies>
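
Once the dependency is on the classpath, a client is created from the ZooKeeper ensemble address and a service code. Below is a minimal sketch, assuming the localhost ZooKeeper and the "test" service code used in the test setup further down; the entry point shown follows the Arcus Java client user guide and should be checked against your client version:

  import java.util.concurrent.Future;
  import net.spy.memcached.ArcusClient;
  import net.spy.memcached.ConnectionFactoryBuilder;

  public class HelloArcus {
      public static void main(String[] args) throws Exception {
          // ZooKeeper address and service code match the test setup below.
          ArcusClient client = ArcusClient.createArcusClient(
                  "localhost:2181", "test", new ConnectionFactoryBuilder());

          Future<Boolean> setOp = client.set("greeting", 60, "hello arcus");
          System.out.println("set ok: " + setOp.get());
          System.out.println("get: " + client.get("greeting"));

          client.shutdown();
      }
  }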

    Building

    To build your own library, simply run the following maven command:

    $ mvn clean install
    
    # Test cases may not run properly if you do not already have memcached
    # and ZooKeeper installed on the local machine.  To skip tests, use skipTests.
    
    $ mvn clean install -DskipTests=true
    

    Running Test Cases

    Before running test cases, make sure to set up a local ZooKeeper and run an Arcus memcached instance. Several Arcus specific test cases assume that there is an Arcus instance running, along with ZooKeeper.

    First, make a simple ZooKeeper configuration file. By default, tests assume ZooKeeper is running at localhost:2181.

    $ cat test-zk.conf
    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial 
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between 
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    dataDir=/home1/openarcus/zookeeper_data
    # the port at which the clients will connect
    clientPort=2181
    maxClientCnxns=200
    

    Second, create znodes for one memcached instance running at localhost:11211. ZooKeeper comes with a command line tool. The following script uses it to set up the directory structure.

    $ cat setup-test-zk.bash
    
    ZK_CLI="./zookeeper/bin/zkCli.sh"
    ZK_ADDR="-server localhost:2181"
    
    $ZK_CLI $ZK_ADDR create /arcus 0
    $ZK_CLI $ZK_ADDR create /arcus/cache_list 0
    $ZK_CLI $ZK_ADDR create /arcus/cache_list/test 0
    $ZK_CLI $ZK_ADDR create /arcus/client_list 0
    $ZK_CLI $ZK_ADDR create /arcus/client_list/test 0
    $ZK_CLI $ZK_ADDR create /arcus/cache_server_mapping 0
    $ZK_CLI $ZK_ADDR create /arcus/cache_server_log 0
    $ZK_CLI $ZK_ADDR create /arcus/cache_server_mapping/127.0.0.1:11211 0
    $ZK_CLI $ZK_ADDR create /arcus/cache_server_mapping/127.0.0.1:11211/test 0
    $ZK_CLI $ZK_ADDR create /arcus/cache_server_mapping/127.0.0.1:11212 0
    $ZK_CLI $ZK_ADDR create /arcus/cache_server_mapping/127.0.0.1:11212/test 0
    

    Now start the ZooKeeper instance using the configuration above.

    $ ZOOCFGDIR=$PWD ./zookeeper/bin/zkServer.sh start test-zk.conf
    

    And, start the memcached instance.

    $ /home1/openarcus/bin/memcached -E /home1/openarcus/lib/default_engine.so -p 11211 -z localhost:2181
    

    Finally, run test cases.

    $ mvn test
    [...]
    Results :
    
    Tests run: 722, Failures: 0, Errors: 0, Skipped: 8
    
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 3:17.308s
    [INFO] Finished at: Thu Mar 06 13:42:58 KST 2014
    [INFO] Final Memory: 9M/722M
    [INFO] ------------------------------------------------------------------------
    

    API Documentation

    Please refer to Arcus Java Client User Guide for the detailed usage of Arcus java client.

    Issues

    If you find a bug, please report it via the GitHub issues page.

    https://github.com/naver/arcus-java-client/issues

    Arcus Contributors

In addition to those who had contributed to the original spymemcached, the following people at NAVER have contributed to arcus-java-client.

    Chisu Yu (netspider) chisu.yu@navercorp.com; puseori9th@gmail.com
    Hoonmin Kim (harebox) hoonmin.kim@navercorp.com; harebox@gmail.com
    YeaSol Kim (ngleader) sol.k@navercorp.com; ngleader@gmail.com
    SeongHwa Ahn getconnected@sk.com; ash@nhn.com
    HyongYoub Kim hyongyoub.kim@navercorp.com

    License

    Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

    Patents

Arcus has patents on the b+tree smget operation. Refer to the PATENTS file in this directory for the patent information.

    Under the Apache License 2.0, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable patent license is granted to any user for any usage. You can see the specifics on the grant of patent license in LICENSE file in this directory.

    Visit original content creator repository https://github.com/naver/arcus-java-client