I realized there was no easy way to extract RSS feeds from your podcast app. I use iTunes on my desktop to download my podcasts, so I created this repo, which extracts the podcasts from your iTunes folder and gives you their RSS feeds.
Instructions:
Download this repo to your disk.
Open your command prompt and navigate to this repo
Enter the repo: cd Podcast-Feed-Extractor
Install the requirements: pip install -r requirements.txt
Find the path to your iTunes folder. By default, on Windows, it is installed at C:\Users\<user>\Music\iTunes
Run the script: python3 podcast.py <path to the iTunes folder found in the previous step>
The output files are as follows:
A pickle file (podcast_feed.pkl), in case you want to load the dictionary { podcast_name : rss_feed } for further manipulation in Python
A text file (error_podcasts) containing the names of podcasts whose RSS feeds could not be identified
A text file (rss_feeds) listing the name of each podcast and its RSS URL
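If you want to work with the results in Python, a minimal sketch for loading the pickle output could look like this (the file name comes from the list above; the loop is just an illustration):

import pickle

# Load the { podcast_name : rss_feed } dictionary produced by podcast.py.
with open("podcast_feed.pkl", "rb") as f:
    feeds = pickle.load(f)

for name, rss_url in feeds.items():
    print(f"{name}: {rss_url}")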
Digital Brewery is a digital twin built in Unreal Engine 5 that simulates and visualizes the complete process of the USACA brewery. It uses artificial intelligence and Azure technologies to create an immersive, real-time experience. The project integrates 3D models made in Blender, advanced simulation with Azure Digital Twins, data processing through Azure PostgreSQL, and interaction capabilities powered by Azure OpenAI.
🚀 Technologies Used
Unreal Engine 5: graphics engine for building 3D visual experiences and simulations.
Azure Digital Twins: platform for modeling and simulating physical assets, processes, and environments.
Azure OpenAI: AI services for natural interaction with the digital twin.
Azure PostgreSQL: scalable, secure relational database.
Blender: tool for 3D modeling, texturing, and animation.
Layered Architecture Diagram
System Requirements
Hardware:
CPU: 8-core processor or better
GPU: DirectX 12-compatible graphics card
RAM: 16 GB or more
Storage: 10 GB of available space
Software:
Unreal Engine 5.x
Blender 3.x
Azure CLI (for configuring the Azure services)
Visual Studio 2022 with the Unreal Engine development tools
Cloning the Repository
git clone https://github.com/MillerMosquera/DigitalBrewery.git
cd DigitalBrewery
Important Notes
Important
Ensure that data is optimized and synchronized in real time between the physical systems and the digital twin.
Note
Keep the documentation for every configuration and change up to date.
This will automatically authenticate the user and retrieve its uid, which is needed for most of the queries.
The uid is tied to the database and is stored in the RPC client.
Use the raw_query method.
This will return the request body as a string.
It can be parsed with any JSON library, for example nlohmann/json, which is header-only.
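Purely as an illustration of that parse step (the docs above point to nlohmann/json for C++), here is the same idea using Python's standard json module; the payload shape and field names are made up:

import json

raw = '{"result": {"uid": 7}}'  # example string returned by raw_query; the shape is hypothetical
body = json.loads(raw)
uid = body["result"]["uid"]
print(uid)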
Hazmat math implements some basic ECC arithmetic for use with Cryptography.io objects using the OpenSSL backend.
Specifically, _EllipticCurvePrivateKey and _EllipticCurvePublicKey.
Any operations with EC_POINT will return an _EllipticCurvePublicKey and any operations with BN will return an _EllipticCurvePrivateKey.
Usage:
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import ec
from hazmat_math import operations as ops
priv_a = ec.generate_private_key(ec.SECP256K1(), default_backend())
priv_b = ec.generate_private_key(ec.SECP256K1(), default_backend())
pub_a = priv_a.public_key()
pub_b = priv_b.public_key()
# Multiplication
priv_c = ops.BN_MOD_MUL(priv_a, priv_b)
pub_c = ops.EC_POINT_MUL(pub_a, priv_a)
# Division
priv_c = ops.BN_DIV(priv_a, priv_b)
# Inversion
inv_a_priv = ops.BN_MOD_INVERSE(priv_a)
inv_a_pub = ops.EC_POINT_INVERT(pub_a)
# Addition
priv_c = ops.BN_MOD_ADD(priv_a, priv_b)
pub_c = ops.EC_POINT_ADD(pub_a, pub_b)
# Subtraction
priv_c = ops.BN_MOD_SUB(priv_a, priv_b)
pub_c = ops.EC_POINT_SUB(pub_a, pub_b)
# Get generator point from curve
gen_point = ops.CURVE_GET_GENERATOR(ec.SECP256K1())
# Get order of curve
order = ops.CURVE_GET_ORDER(ec.SECP256K1())
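Continuing the usage example above, a hedged sanity check (not from the original docs) is to derive the same shared point from both key pairs, Diffie-Hellman style, with EC_POINT_MUL:

# Sketch only: assumes EC_POINT_MUL behaves as listed above
# (public key * private key -> _EllipticCurvePublicKey).
shared_ab = ops.EC_POINT_MUL(pub_b, priv_a)  # a * B
shared_ba = ops.EC_POINT_MUL(pub_a, priv_b)  # b * A

# Both sides should land on the same curve point.
assert shared_ab.public_numbers() == shared_ba.public_numbers()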
Installation:
Clone or download the repository
Ensure that you have cryptography.io installed (pip install cryptography)
This project allows tracking the amount of time spent on projects.
Categories and subcategories can be used to organize projects. Entries can then be added to these projects. The total time spent on all projects can then be tracked, along with the amount of time spent per project, category and subcategory.
I am currently seeking employment as a developer.
This project shows my ability to understand, work with, and implement the following:
C#
Winforms
SQL Server
Dapper
Separating core logic from the UI into a library
Using NuGet packages
Dependency Injection
Please feel free to contact me if interested in talking about this project, C#, or employment opportunities. Thanks for reading!
This program will automatically (re-)create the SQLite database if it does not exist.
Unfortunately, automatically creating the database turned out to be harder than I expected for SQL Server using Dapper, so the SQL Server database will not be created automatically. If you use the SQL Server option, I recommend publishing the database from the SQL Server database project in the solution. Another option is running the provided script (TimeTrackerDB.sql) after checking whether any modifications are needed for your setup. For SQL Server, the connection string will also need to be modified in appsettings.json.
The default database is SQLite. Valid settings for “DatabaseType” in appsettings.json are SQLite and MSSQL.
I would love to hear any suggestions on how to automatically create the database, tables, and stored procedures if they do not already exist for SQL Server using Dapper.
These are your father’s parentheses.
Elegant weapons for a more… civilized age.
— xkcd/297
Nyoom.nvim was an answer to abstracted and complex codebases that take away end-user extensibility, try to be a one-size-fits-all config, and needlessly lazy load everything. It solves this problem by providing a set of well integrated modules similar to doom-emacs. Modules contain curated plugins and configurations that work together to provide a unified look and feel across all of Nyoom. The end goal of nyoom.nvim is to be used as a framework config for users to extend and add upon, leading to a more unique editing experience.
Nyoom can be anything you’d like. Enable all the modules for the vscode-alternative in you, remove some and turn it into the prose editor of your dreams, or disable everything and have a nice set of macros to start your configuration from scratch!
At its core, Nyoom consists of a set of intuitive macros, a nice standard library, a set of modules, and some opinionated default options, and nothing more.
Designed around the mantras of doom-emacs:
Gotta go fast. Startup and run-time performance are priorities.
Close to metal. There’s less between you and vanilla neovim by design. That’s less to grok and less to work around when you tinker.
Opinionated, but not stubborn. Nyoom (and Doom) are about reasonable defaults and curated opinions, but use as little or as much of it as you like.
Your system, your rules. You know better. At least, Nyoom hopes so! There are no external dependencies (apart from rust), and never will be.
Nix/Guix is a great idea! The Neovim ecosystem is temperamental. Things break, and they break often. Disaster recovery should be a priority! Nyoom's package management should be declarative and your private config reproducible, and it comes with a means to roll back releases and updates (still a WIP).
It also aligns with many of Doom’s features:
Minimalistic good looks inspired by modern editors.
A modular organizational structure for separating concerns in your config.
A standard library designed to simplify your fennel bike shedding.
A declarative package management and module system (inspired by use-package, powered by Packer.nvim). Install packages from anywhere, and pin them to any commit.
A Space(vim)-esque keybinding scheme, centered around leader and localleader prefix keys (SPC and SPCm).
Project search (and replace) utilities, powered by ripgrep, and telescope.
Per-file indentation style detection and editorconfig integration. Let someone else argue about tabs vs spaces.
Support for modern tooling and navigation through language-servers, null-ls, and tree-sitter.
For more info, check out our (under construction) FAQ.
Prerequisites
Neovim v0.8.1+
Git
Ripgrep 11.0+
Nyoom works best with a modern terminal with Truecolor support. Optionally, you can install Neovide if you’d like a gui.
Nyoom is composed of optional modules, some of which may have additional dependencies. Run :checkhealth to check for anything you may have missed.
Then read getting started to be walked through
installing, configuring and maintaining Nyoom Nvim.
It’s a good idea to add ~/.config/nvim/bin to your PATH! Other bin/nyoom
commands you should know about:
nyoom sync to synchronize your private config with Nyoom by installing missing
packages, removing orphaned packages, and regenerating caches. Run this
whenever you modify your packages.fnl and modules.fnl
nyoom upgrade to update Nyoom to the latest release
nyoom lock to dump a snapshot of your currently installed packages to a lockfile.
Getting help
Neovim is no journey of a mere thousand miles. You will run into problems and
mysterious errors. When you do, here are some places you can look for help:
If you have an issue with a plugin in Nyoom.nvim, first you should report it here. Please don’t bother package maintainers with issues that are caused by my configs, and vice versa.
The goal is to build a model that predicts the most in-demand skills in the job market. This can help professionals, students, and the industry at large make informed decisions about which skills to develop and improve.
The main objective of the project is to analyze job postings in the data field in order to identify patterns and trends that reveal the most in-demand skills in the current job market. This research can help people who are interested in this field, or who want to break into it, use the resulting predictions to guide their career decisions. All of this is done with the Kaggle dataset Data Jobs Listings – Glassdoor.
Repository files
EDA_project.ipynb: contains the full exploratory analysis of the tables used, which are "glassdoor.csv", "glassdoor_benefits_highlights.csv", and "glassdoor_salary_salaries.csv" (download these CSVs to run the code).
ETL_project.pdf: the document that describes and explains each of the phases in detail.
dataJobs_script.py: the main script; it connects to PostgreSQL, loads the data into the database, replaces null values in some columns, fills empty fields, and normalizes the job titles (see the sketch after this list).
df_tocsv_and_transfomations.py: uses pandas to ingest the CSV correctly; this is where most of the data transformation for normalizing jobTitle was done.
dimesions_script.py: inserts the data into the dimension tables that will be used later in the project.
project_dashboard.pdf: the dashboard I built in Power BI, with three charts so far.
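As a rough idea of what the connection and load step in dataJobs_script.py looks like, here is a minimal psycopg2 sketch; the credentials and the table and column names are placeholders, not the actual ones used in the script:

import psycopg2

# Placeholder connection settings; use the user, password and database you configured.
conn = psycopg2.connect(host="localhost", dbname="ETL", user="postgres", password="secret")
cur = conn.cursor()

# Hypothetical example of the null-replacement step described above.
cur.execute("UPDATE glassdoor SET job_title = 'Unknown' WHERE job_title IS NULL")
conn.commit()

cur.close()
conn.close()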
How do I run the scripts?
Clone the repository with https://github.com/VinkeArtunduaga/dataJobs_project.git
Install Python 3.11
Install the PostgreSQL database
For the libraries, run pip install psycopg2 and pip install pandas; json and csv are also used, but they ship with the standard library.
Create a user and password for PostgreSQL
Create a database in pgAdmin 4 named ETL (that is what I called mine, but it can be changed)
Update the database connection settings to match the user, password, and database you chose
First run df_tocsv_and_transformations.py from the terminal with python df_tocsv_and_transformations.py
Then run python dataJobs_script.py to create the main table with its normalizations and null cleanup.
Finally, run the dimension creation script: python dimensions_script.py
If you want to run the EDA:
Download Jupyter (JupyterLab makes this more convenient)
If pandas is not already installed, run pip install pandas in the terminal (json is part of the Python standard library and does not need to be installed)
Update the paths to the glassdoor.csv, glassdoor_benefits_highlights.csv, and glassdoor_salary_salaries.csv files.
Run the cells one at a time or all at once to see the analysis.
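For reference, loading the three CSVs for the EDA might look like this (the paths are placeholders; point them at wherever you downloaded the files):

import pandas as pd

glassdoor = pd.read_csv("glassdoor.csv")
benefits = pd.read_csv("glassdoor_benefits_highlights.csv")
salaries = pd.read_csv("glassdoor_salary_salaries.csv")

print(glassdoor.info())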
For the second part of the project I created a folder called API; everything built for that part is in there.
❗ Caution: Adding an image, a label, or similar reduces the readability of the QR code. Consider using a higher error correction level (e.g. H) in those cases.
Crisp
As you can set the size of the image, the size of the 'modules' (the black/white boxes that make up the QR code) is calculated from the image size and the number of quiet modules. The calculation can result in a non-integer value, so a module might be e.g. 4.5 pixels wide. With crisp set to false the resulting image is drawn fuzzy; setting it to true results in 'sharp' lines.
crisp false
crisp true
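To make the trade-off concrete, here is a rough back-of-the-envelope calculation (not kjua's actual code; the numbers are just an example):

# Module size in pixels = image size / (code modules + quiet modules on both sides).
size = 200      # requested image size in pixels
modules = 21    # modules per side of a version-1 code (example value)
quiet = 0       # quiet-zone modules per side

module_px = size / (modules + 2 * quiet)
print(module_px)  # ~9.52 px per module -> non-integer, so lines look fuzzy unless crisp is true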
Label
Kjua lets you embed text or an image into the code. This is controlled by the mode setting.
This can reduce the readability of the code!
Image
Image as Code
Clear + Image
This mode lets you "cut out" parts of the QR code and add an image at the same time.
labelimage, imagelabel and clearimage
Use these if you want a label AND an image. In these modes, mSize, mPosX and mPosY can be provided as arrays.
In labelimage mode, the first value (index 0) of the mSize, mPosX and mPosY arrays is used for the label and the second value (index 1) for the image; in imagelabel mode it is the other way around. Also, in labelimage mode the label is drawn before the image and therefore sits somewhat "in the background" if the two overlap.
All options
text encoded content (defaults to '')
render render-mode: ‘image’, ‘canvas’, ‘svg’ (defaults to image)
crisp render pixel-perfect lines (defaults to true)
minVersion minimum version: 1..40 (defaults to 1)
ecLevel error correction level: ‘L’, ‘M’, ‘Q’ or ‘H’ (defaults to L)
size size in pixels (defaults to 200; minimum is 24 or higher, depending on how many characters you encode)
fill code color (defaults to #333)
back background color (defaults to #fff, for transparent use '' or null)
rounded rounded corners in percent: 0..100 (defaults to 0; does not work if render is set to 'svg')
quiet quiet zone in modules (defaults to 0)
mode modes: ‘plain’, ‘label’, ‘image’ or ‘clear’ (defaults to plain, set label or image property if you change this)
mSize label/image size in percent: 0..100 (defaults to 30)
mPosX label/image x position in percent: 0..100 (defaults to 50)
mPosY label/image y position in percent: 0..100 (defaults to 50)
label additional label text (defaults to '')
fontname font for additional label text (defaults to sans-serif)
fontcolor font-color for additional label text (defaults to #333)
fontoutline draw an outline on the label text in the color of the back (defaults to true)
image additional image (defaults to undefined, use an HTMLImageElement or base64-string)
imageAsCode draw the image as part of the code (defaults to false)
renderAsync whether or not rendering is done inside a "requestAnimationFrame" call (defaults to false; use true if you want to generate more than one code, e.g. in a batch)
cssClass additional css-class that will be appended to the div-container that contains the qr-code (defaults to undefined)
If you plan to render more than one barcode (e.g. batch generation), I recommend using the renderAsync flag. It executes the rendering inside a "requestAnimationFrame" call.
The component comes with a helper class (QrCodeHelper) that helps you generate codes that encode structured information, such as a contact.
Currently it supports the generation of:
SMS: number with optional pre-defined text
Call
Geo-Information: a point on the map with Latitude and Longitude
Events
Email: recipient with an optional subject and text
WiFi: SSID with optional password and a flag for hidden WiFis
Contact Information: name with optional address, telephone-number(s), email, url.
Contact encoding is done with the MECARD format and NOT vCard! vCard produces a longer string and therefore a bigger code, which can hurt readability for scanners. You can, of course, create a vCard string as well, but the format is more complex.
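To illustrate why MECARD stays compact, here is a rough sketch of building such a payload (the field values are made up and this helper is not part of the library):

# Build a minimal MECARD contact string; compare it with the far more verbose vCard format.
fields = {"N": "Doe,John", "TEL": "+15551234567", "EMAIL": "john.doe@example.com"}
mecard = "MECARD:" + "".join(f"{key}:{value};" for key, value in fields.items()) + ";"
print(mecard)  # MECARD:N:Doe,John;TEL:+15551234567;EMAIL:john.doe@example.com;;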
Generate PDF
See the example above.
It works with pure kjua and has in fact nothing to do with ngx-kjua but I thought somebody might find it useful.
This repository contains a number of programs I made in C++ to practice data structures and algorithms. Most of the questions, at the moment, are from Coding Blocks' Launchpad course. All the .cpp files contain the problem statement in a comment at the start of the program.
Arrays
This folder contains programs using:-
1-D Arrays
2-D Arrays
Strings
[BOTH STATIC AND DYNAMIC]
Bitmasking
This folder contains programs that:-
solve problems more efficiently by manipulating the bits of the provided data.
Fundamentals
This folder contains programs about:-
Basic concepts such as:
Datatypes
Type conversions
Pointers
Loops
Patterns
Different types of operators, including:
Arithmetic
Logical
Bitwise
Conditional
Miscellaneous
Dereference (*)
Address of (&)
Linked Lists
This folder contains problems regarding linked lists such as:-
Sorting
Searching
Insertion
Deletion
Both doubly and singly linked lists are used. Linked lists have been implemented through self-referential structures / classes.
Searching and Sorting
This folder contains programs using:-
Linear Search
Binary Search
Bubble Sort
Selection Sort
Insertion Sort
Wave Sort
Counting Sort
Inbuilt sort function
Number Theory
This folder consists of programs made using the basics of number theory
Recursion
In this folder, a variety of different problems and algorithms have been implemented using recursion.
The concept of backtracking is also used in some problems.
note:- Programs for merge sort and quick sort are in the recursion folder since they have been implemented recursively and not iteratively.
Stacks and Queues
This folder consists of programs with practical uses of stacks and queues and the implementation of stacks and queues using:-
To build your own library, simply run the following maven command:
$ mvn clean install
# Test cases may not run properly if you do not already have memcached
# and ZooKeeper installed on the local machine. To skip tests, use skipTests.
$ mvn clean install -DskipTests=true
Running Test Cases
Before running test cases, make sure to set up a local ZooKeeper and run an Arcus memcached instance. Several Arcus-specific test cases assume that there is an Arcus instance running, along with ZooKeeper.
First, make a simple ZooKeeper configuration file. By default, tests assume
ZooKeeper is running at localhost:2181.
$ cat test-zk.conf
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/home1/openarcus/zookeeper_data
# the port at which the clients will connect
clientPort=2181
maxClientCnxns=200
Second, create znodes for one memcached instance running at localhost:11211.
ZooKeeper comes with a command line tool. The following script uses it to
set up the directory structure.
Arcus has patents on the b+tree smget operation.
Refer to the PATENTS file in this directory for the patent information.
Under the Apache License 2.0, a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable patent license is granted to any user for any usage.
You can see the specifics of the patent license grant in the LICENSE file in this directory.