[{"categories":null,"contents":"This is a short tutorial on connecting a zigbee device (an Aqara cube) to an MQTT server, so you can control your zigbee devices from the network.\nIf you\u0026rsquo;re anything like me, you\u0026rsquo;re probably a sucker for IoT devices. For a long time, I\u0026rsquo;ve been using WiFi-enabled lights, and Amazon dash buttons to control them. To keep these (cheap Chinese) internet enabled devices away from your network and their respective cloud services, you\u0026rsquo;ll probably want to set up a dedicated network in your router (more on this on a future post, maybe). Another disadvantage of WiFi devices is that they\u0026rsquo;re relatively power hungry.\nA popular alternative is using ZigBee for communication. It is a dedicated protocol similar to bluetooth (BLE), with lower power requirements and bitrate.\nTake the (super cute) aqara cube as an example. It is a small cube that detects rotation on all of its axes, and tapping events. Here\u0026rsquo;s a video:\n To connect to zigbee devices you will need a zigbee enabled gateway (a.k.a. hub), which connects to your WiFi network and your zigbee devices. Once again, this means adding an internet-enabled device to your home, and probably a couple of cloud services.\nAs an alternative, you can set up your own zigbee gateway, and control it to your home automation platform of choice (e.g. home assistant). We will cover how to set up a zigbee2mqtt gateway that is also connected to an MQTT server, so you can use MQTT to control your devices and get notifications.\nWhat you need:\n Aqara cube. CC2531 zigbee sniffer. CC-debugger. You will need to flash your sniffer. For that, you only need to follow the instructions from the zigbee2mqtt documentation.\nOnce you\u0026rsquo;re done flashing, you\u0026rsquo;re ready to set up the zigbee2mqtt server. For convenience, I wrote a simple docker-compose to deploy a zigbee2mqtt server and a test mosquitto server:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 version: \u0026#39;2.1\u0026#39; services: zigbee2mqtt: image: koenkk/zigbee2mqtt container_name: zigbee2mqtt restart: always volumes: - ./z2m-data/:/app/data/ devices: - \u0026#34;/dev/ttyACM0\u0026#34; networks: - hass mqtt: image: eclipse-mosquitto ports: - 1883:1883 - 9001:9001 networks: - hass volumes: - ./mosquitto.conf:/mosquitto/config/mosquitto.conf networks: hass: driver: overlay You can test your installation with:\n❯ mosquitto_sub -h localhost -p 1883 -t \u0026#39;zigbee2mqtt/#\u0026#39; online {\u0026#34;battery\u0026#34;:17,\u0026#34;voltage\u0026#34;:2925,\u0026#34;linkquality\u0026#34;:149,\u0026#34;action\u0026#34;:\u0026#34;rotate_right\u0026#34;,\u0026#34;angle\u0026#34;:12.8} {\u0026#34;battery\u0026#34;:17,\u0026#34;voltage\u0026#34;:2925,\u0026#34;linkquality\u0026#34;:141,\u0026#34;action\u0026#34;:\u0026#34;slide\u0026#34;,\u0026#34;side\u0026#34;:2} {\u0026#34;battery\u0026#34;:17,\u0026#34;voltage\u0026#34;:2925,\u0026#34;linkquality\u0026#34;:120} {\u0026#34;battery\u0026#34;:17,\u0026#34;voltage\u0026#34;:2925,\u0026#34;linkquality\u0026#34;:141,\u0026#34;action\u0026#34;:\u0026#34;wakeup\u0026#34;} zigbee2mqtt supports the following events for the aqara cube: shake, wakeup, fall, tap, slide, flip180, flip90, rotate_left and rotate_right. 
Every event has additional information, such as the sides involved or the degrees turned.\nNow you are ready to set up Home Assistant support in zigbee2mqtt following this guide.\n","permalink":"/post/2019-01-06-zigbee2mqtt/","tags":["mqtt","iot","zigbee"],"title":"Controlling Zigbee devices with MQTT"},{"categories":null,"contents":"tqdm is a nice way to add progress bars on the command line or in a Jupyter notebook.\nfrom tqdm import tqdm import time for i in tqdm(range(100)): time.sleep(1) ","permalink":"/post/2016-09-28-tqdm/","tags":["python"],"title":"Progress bars in python"},{"categories":null,"contents":"Today\u0026rsquo;s post is half a quick note, half public shaming. In other words, it is a reminder to be very careful with OAuth tokens and passwords.\nAs part of moving to emacs, I started using the incredibly useful gh.el. When you first use it, the extension saves either your password or an OAuth token in your .gitconfig file. This is cool and convenient, unless you happen to be publishing your .gitconfig file in a public repo.\nSo, how can you still share your gitconfig without sharing your password/token with the rest of the world? Since Git 1.7.0, you can include other files in your gitconfig.\n[include] path = ~/.gitconfig_secret And now, in your .gitconfig_secret file, you just have to add this:\n[github] user = balkian token = \u0026#34;\u0026lt; Your secret token \u0026gt;\u0026#34; ","permalink":"/post/2015-04-10-github-dotfiles/","tags":["github","git","dotfiles"],"title":"Sharing dotfiles"},{"categories":null,"contents":" Zotero is an Open Source tool that lets you organise your bibliography, syncing it with the cloud. Unlike other alternatives such as Mendeley, Zotero can upload the attachments and data to a private cloud via WebDav.\nIf you use nginx as your web server, know that even though it provides partial support for webdav, Zotero needs more than that. Hence, you will need another webdav server, and optionally let nginx proxy to it. This short post provides the basics to get that setup working under Debian/Ubuntu.\nSetting up Apache First we need to install Apache:\nsudo apt-get install apache2 Change the head of \u0026ldquo;/etc/apache2/sites-enabled/000-default\u0026rdquo; to:\n\u0026lt;VirtualHost *:880\u0026gt; Then, create a file /etc/apache2/sites-available/webdav:\nAlias /dav /home/webdav/dav \u0026lt;Location /dav\u0026gt; Dav on Order Allow,Deny Allow from all Options +Indexes AuthType Basic AuthName DAV AuthBasicProvider file AuthUserFile /home/webdav/.htpasswd Require valid-user \u0026lt;/Location\u0026gt; Ideally, you want your webdav folders to be private, adding authentication to them. So you need to create the webdav and zotero users and add the passwords to an htpasswd file. Even though you could use a single user, since you will be configuring several clients with your credentials I encourage you to create the zotero user as well. 
This way you can always change the password for zotero without affecting any other application using webdav.\nsudo adduser webdav sudo htpasswd -c /home/webdav/.htpasswd webdav sudo htpasswd /home/webdav/.htpasswd zotero sudo mkdir -p /home/webdav/dav/zotero Enable the site and restart apache:\nsudo a2enmod webdav sudo a2enmod dav_fs sudo a2ensite webdav sudo service apache2 restart At this point everything should be working at http://\u0026lt;your server\u0026gt;:880/dav/zotero\nSetting up NGINX After the Apache side is working, we can use nginx as a proxy to get cleaner URIs. In your desired site/location, add this:\nlocation /dav { client_max_body_size 20M; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_pass http://127.0.0.1:880; } Now just reload nginx:\nsudo service nginx force-reload Extras Zotero Reader - HTML5 client Zandy - Android Open Source client ","permalink":"/post/2014-12-09-zotero/","tags":["zotero","webdav","nginx","apache"],"title":"Zotero"},{"categories":null,"contents":"This is a quick note on proxying a local python application (e.g. flask) to a subdirectory in Apache. This assumes that the file wsgi.py contains a WSGI application with the name application. Hence, wsgi:application.\nGunicorn \u0026lt;Location /myapp/\u0026gt; ProxyPass http://127.0.0.1:8888/myapp/ ProxyPassReverse http://127.0.0.1:8888/myapp/ RequestHeader set SCRIPT_NAME \u0026#34;/myapp/\u0026#34; \u0026lt;/Location\u0026gt; Important: SCRIPT_NAME and the end of the ProxyPass URL MUST BE THE SAME. Otherwise, Gunicorn will fail miserably.\nTry it with:\nvenv/bin/gunicorn -w 4 -b 127.0.0.1:8888 --log-file - --access-logfile - wsgi:application uWSGI This is a very simple configuration. I will try to upload one with more options for uwsgi (in a .ini file).\n\u0026lt;Location /myapp/\u0026gt; SetHandler uwsgi_handler uWSGISocket 127.0.0.1:8888 \u0026lt;/Location\u0026gt; Try it with:\nuwsgi --socket 127.0.0.1:8888 -w wsgi:application Extra: Supervisor If everything went as expected, you can wrap your command in a supervisor config file and let it handle the server for you.\n[unix_http_server] file=/tmp/myapp.sock ; path to your socket file [supervisord] logfile = %(here)s/logs/supervisor.log childlogdir = %(here)s/logs/ [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] logfile = %(here)s/logs/supervisorctl.log serverurl=unix:///tmp/myapp.sock ; use a unix:// URL for a unix socket [program:myapp] command = venv/bin/gunicorn -w 4 -b 0.0.0.0:5000 --log-file %(here)s/logs/gunicorn.log --access-logfile - wsgi:application directory = %(here)s environment = PATH=%(here)s/venv/bin/ logfile = %(here)s/logs/myapp.log ","permalink":"/post/2014-10-09-proxies/","tags":["python","apache","proxy","gunicorn","uwsgi"],"title":"Proxies with Apache and python"},{"categories":null,"contents":" Developing a python module and publishing it on Github is cool, but most of the time you want others to download and use it easily. That is the role of PyPi, the python package repository. In this post I show you how to publish your package in less than 10 minutes.\nChoose a fancy name If you haven\u0026rsquo;t done so yet, take a minute or two to think about this. To publish on PyPi you need a name for your package that isn\u0026rsquo;t taken. 
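A quick way to check whether a name is free is to ask the package index itself. Here is a small sketch (an addition to the original post, assuming the requests library and the JSON API of the current index at pypi.org/pypi/\u0026lt;name\u0026gt;/json; when this post was written the index still lived at pypi.python.org):

import requests


def name_taken(name):
    # 200 means a project with that name already exists, 404 means it is free
    resp = requests.get("https://pypi.org/pypi/{}/json".format(name))
    return resp.status_code == 200


print(name_taken("senpy"))                   # True: already published
print(name_taken("some-name-nobody-used"))   # hopefully False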
What\u0026rsquo;s more, a catchy and unique name will help people remember your module and feel more inclined to at least try it.\nThe package name should hint at what your module does, but that\u0026rsquo;s not always the case. That\u0026rsquo;s your call. I personally put uniqueness and memorability over describing the functionality.\nCreate a .pypirc configuration file [distutils] # this tells distutils what package indexes you can push to index-servers =pypi # the live PyPI pypitest # test PyPI [pypi] # authentication details for live PyPI repository = https://pypi.python.org/pypi username = { your_username } password = { your_password } # not necessary [pypitest] # authentication details for test PyPI repository = https://testpypi.python.org/pypi username = { your_username } As you can see, you need to register both in the main pypi repository and the testing server. The usernames and passwords might be different, that is up to you!\nPrepare your package This should be the structure:\nroot-dir/ # Any name you want setup.py setup.cfg LICENSE.txt README.md mypackage/ __init__.py foo.py bar.py baz.py setup.cfg [metadata] description-file = README.md The markdown README is the de facto standard on Github, but you can also use rST (reStructuredText), the standard in the python community.\nsetup.py from distutils.core import setup setup(name = \u0026#39;mypackage\u0026#39;, packages = [\u0026#39;mypackage\u0026#39;], # this must be the same as the name above version = \u0026#39;{ version }\u0026#39;, description = \u0026#39;{ description }\u0026#39;, author = \u0026#39;{ name }\u0026#39;, author_email = \u0026#39;{ email }\u0026#39;, url = \u0026#39;https://github.com/{user}/{package}\u0026#39;, # URL to the github repo download_url = \u0026#39;https://github.com/{user}/{repo}/tarball/{version}\u0026#39;, keywords = [\u0026#39;websockets\u0026#39;, \u0026#39;display\u0026#39;, \u0026#39;d3\u0026#39;], # list of keywords that represent your package classifiers = [], ) You might notice that the download_url points to a Github URL. We could host our package anywhere, but Github is a convenient option. To create the tarball and the zip packages, you only need to create a tag in your repository and push it to github:\ngit tag {version} -m \u0026#34;{ Description of this tag/version}\u0026#34; git push --tags origin master Push to the testing/main pypi server It is advisable that you try your package on the test repository and fix any problems first. The process is simple:\npython setup.py register -r {pypitest/pypi} python setup.py sdist upload -r {pypitest/pypi} If everything went as expected, you can now install your package through pip and browse your package\u0026rsquo;s page. For instance, check my senpy package: https://pypi.python.org/pypi/senpy\npip install senpy ","permalink":"/post/2014-09-23-publishing-to-pypi/","tags":["github","python","pypi"],"title":"Publishing on PyPi"},{"categories":null,"contents":" As part of the OpeNER hackathon we decided to build a prototype that would allow us to compare how different countries feel about several topics. We used the OpeNER pipeline to get the sentiment from a set of newspaper articles we gathered from media in several languages. Then we aggregated those articles by category and country (using the source of the article or the language it was written in), obtaining the \u0026ldquo;overall feeling\u0026rdquo; of each country about each topic. 
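The aggregation step itself boils down to averaging the sentiment scores per (country, topic) pair. A rough sketch in Python (the field names here are made up for illustration; the real prototype worked on the OpeNER pipeline's output format):

from collections import defaultdict


def overall_feeling(articles):
    # articles: iterable of dicts such as
    # {"country": "es", "topic": "economy", "sentiment": -0.4}
    totals = defaultdict(lambda: [0.0, 0])
    for article in articles:
        key = (article["country"], article["topic"])
        totals[key][0] += article["sentiment"]
        totals[key][1] += 1
    # average sentiment per (country, topic)
    return {key: total / count for key, (total, count) in totals.items()}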
Then, we used some fancy JavaScript to make sense out of the raw information.\nIt didn\u0026rsquo;t go too badly; it turns out we won.\nNow, it was time for a face-lift. I used this opportunity to play with new technologies and improve it:\n Using Flask, this time using python 3.3 and Bootstrap 3.0 Cool HTML5+JS cards (thanks to pastetophone) Automatic generation of fake personal data to test the interface Obfuscation of personal emails The result can be seen here.\nPublishing a Python 3 app on Heroku mkvirtualenv -p /usr/bin/python3.3 eurolovemap Since Heroku uses python 2.7 by default, we have to tell it which version we want, although it supports python 3.4 as well. I couldn\u0026rsquo;t get python 3.4 working using the deadsnakes ppa, so I used python 3.3 instead, which works fine but is not officially supported. Just create a file named runtime.txt in your project root, with the python version you want to use:\npython-3.3.1 Don\u0026rsquo;t forget to freeze your dependencies so Heroku can install them:\npip freeze \u0026gt; requirements.txt\nPublishing personal emails There are really sophisticated and effective ways to obfuscate personal emails so that spammers cannot easily grab yours. However, this time I needed something really simple to hide our emails from the simplest form of crawlers. Most of the team are in academia somehow, so in the end all our emails are available on sites like Google Scholar. Anyway, nobody likes getting spammed, so I settled for a custom Caesar cipher. Please, don\u0026rsquo;t use it for any serious application if you are concerned about being spammed.\ndef blur_email(email): return \u0026#34;\u0026#34;.join([chr(ord(i)+5) for i in email]) And this is the client side:\nwindow.onload = function(){ elems = document.getElementsByClassName(\u0026#39;profile-email\u0026#39;); for(var e in elems){ var blur = elems[e].innerHTML; var email = \u0026#34;\u0026#34;; for(var s in blur){ var a = blur.charCodeAt(s); email = email+String.fromCharCode(a-5); } elems[e].innerHTML = email; } } Unfortunately, this approach does not hide your email from anyone using PhantomJS, ZombieJS or similar. For that, other approaches like generating a picture with the address would be necessary. Nevertheless, that would be overkill for a really simple ad-hoc application with custom formatting and just a bunch of emails that could easily be grabbed manually anyway.\nGeneration of fake data To test the contact section of the site, I wanted to populate it with fake data. Fake-Factory is an amazing library that can generate fake data of almost any kind: emails, association names, acronyms\u0026hellip; It even lets you localise the results (get Spanish names, for instance) and generate factories for certain classes (à la Django).\nBut I also wanted pictures; enter Lorem Pixel. With its API you can generate pictures of almost any size, for different topics (e.g. nightlife, people) and with a custom text. You can even use an index, so it will always show the same picture.\nFor instance, the picture below is served through Lorem Pixel.\nBy the way, if you only want cat pictures, take a look at Placekitten. And for NSFW text, there\u0026rsquo;s the Samuel L. Jackson Ipsum.\n","permalink":"/post/2014-03-27-updating-eurolovemap/","tags":["javascript","python","heroku"],"title":"Updating EuroLoveMap"},{"categories":null,"contents":"A simple trick. 
If you want to remove all the \u0026lsquo;.swp\u0026rsquo; files from a git repository, just use:\ngit rm --cached \u0026#39;**.swp\u0026#39; ","permalink":"/post/2013-08-22-remove-git-files-with-globbing/","tags":["git"],"title":"Remove git files with globbing"},{"categories":null,"contents":"I\u0026rsquo;ve finally decided to set up a decent personal page. I have settled on github-pages because I like the idea of keeping my site in a repository and having someone else host and deploy it for me. The site will be really simple, mostly static files. Thanks to Github, Jekyll will automatically generate static pages for my posts every time I commit anything new to this repository.\nBut Jekyll can be used independently, so if I ever choose to host the site myself, I can do it quite easily. Another thing that I liked about this approach is that the generated html files can be used in the future, and I will not need Jekyll to serve them. Jekyll is really simple and most of the things are written in plain html. That means that everything could be easily reused if I ever choose to change to another blogging framework (e.g. pelican). But, for the time being, I like the fact that Github takes care of the compilation as well, so I can simply modify or add files through the web interface should I need to.\nI hadn\u0026rsquo;t played with HTML and CSS in a while, so I also wanted to use this site as a playground. At some point, I realised I was doing mostly everything in plain HTML and CSS, and decided to keep it like that for as long as possible. As of this writing, I haven\u0026rsquo;t included any Javascript code in the page. Probably I will use some to add my gists and repositories, but we will see about that.\nI think the code speaks for itself, so you can check out my repository on Github. You can clone and deploy it easily like this:\ngit clone https://github.com/balkian/balkian.github.com cd balkian.github.com jekyll serve -w I will keep updating this post with information about:\n Some Jekyll plugins that might be useful What CSS tricks I learnt The webfonts I used The badge on the left side of the page ","permalink":"/post/2013-08-17-creating-my-web/","tags":["starters","javascript","ruby","github","git"],"title":"Creating my web"},{"categories":null,"contents":"(font-lock-mode) ","permalink":"/cheatsheet/emacs/","tags":["emacs","org","productivity","lisp"],"title":"Emacs"},{"categories":null,"contents":" Ongoing Projects Senpy: a framework for semantic sentiment and emotion analysis services. Soil: an agent-based simulator for social networks based on nx-sim and networkx. Onyx: an ontology for emotion analysis that includes concepts from W3C\u0026rsquo;s provenance. Past Projects Marl: I updated this ontology, originally created by Adam Westerski, to make it compatible with the W3C\u0026rsquo;s provenance ontology. Hermes: one of my first projects, developed together with David Pérez as the special custom assignment in one of our courses. Hermes is an affective bot designed to mimic the behaviour of humans. It included a plug-in system for its sensors and actuators. The information from its sensors changed its emotional state, which was shown via its actuators. Among others, it could fetch information from Twitter or its host system and change the expressions of an external Face made with servo motors or speak via its Text-To-Speech software. For instance, it could detect it was running out of battery, showing a sad face and sending an alerting tweet. 
You can see it in action in these two YouTube videos: Part 1 and Part 2. Maia: the Modular Architecture for Intelligent Agents is an evented agent architecture that aims to update the classical frameworks for intelligent agents with the concepts that emerged from the Live Web. EESTEC.net: the Plone-based official portal of EESTEC. It has been my first and only experience with Plone. I fixed some bugs and implemented basic features. For more information, check my list of public repositories on Github.\n","permalink":"/project/","tags":null,"title":"Index of projects"},{"categories":null,"contents":" Black screen and LightDM doesn\u0026rsquo;t unlock Add this to your /etc/lightdm/lightdm.conf file:\n[LightDM] logind-check-graphical=true Edit previous commands fc is a shell builtin to list and edit previous commands in an editor. In addition to editing a single line (which you can also do with C-x C-e), it also allows you to edit and run several lines at the same time. You use it like this:\nList previous commands\n$ fc -l 10259 nvim deploy.sh 10260* cd .. 10261* nvim content/cheatsheet/linux.md 10262 cd List commands with date (in zsh)\n$ fc -ld 10260* 19:38 cd .. 10261* 19:38 nvim content/cheatsheet/linux.md 10262 19:40 cd 10263 19:40 fc -l You can add the date too:\n$ fc -fld 10262 1/10/2019 19:40 cd 10263 1/10/2019 19:40 fc -l 10264 1/10/2019 19:40 fc -ld You can edit a range of commands\n$ fc 10262 10264 The range can be relative to the current position, so the previous command is equivalent to:\n$ fc -3 -1 If you save and exit, all the commands are executed as a script and added to your history.\nSource: https://shapeshed.com/unix-fc/\n","permalink":"/cheatsheet/linux/","tags":["linux","arch"],"title":"Linux"},{"categories":null,"contents":" Interesting libraries TQDM From tqdm\u0026rsquo;s github repository:\n tqdm means \u0026ldquo;progress\u0026rdquo; in Arabic (taqadum, تقدّم) and is an abbreviation for \u0026ldquo;I love you so much\u0026rdquo; in Spanish (te quiero demasiado).\n ","permalink":"/cheatsheet/python/","tags":["python","programming"],"title":"Python"},{"categories":null,"contents":" HDMI flickering Avoid HDMI flickering/intermittent blanking on an RPi with a 1400x1050 VGA monitor.\nhdmi_drive=2 hdmi_group=2 hdmi_mode=42 disable_overscan=1 config_hdmi_boost=7 ","permalink":"/cheatsheet/rpi/","tags":["rpi"],"title":"Raspberry Pi"},{"categories":null,"contents":" This file exists solely to respond to the /search URL with the related search layout template.\nNo content shown here is rendered; all content is based on the template layouts/page/search.html\nSetting a very low sitemap priority will tell search engines this is not important content.\nThis implementation uses Fusejs, jquery and mark.js\nInitial setup Search depends on an additional output content type of JSON in config.toml ` [outputs] home = [\u0026quot;HTML\u0026quot;, \u0026quot;JSON\u0026quot;] \\`\nSearching additional fields To search additional fields defined in front matter, you must add them in 2 places.\nEdit layouts/_default/index.JSON This exposes the values in /index.json i.e. add category ` ... \u0026quot;contents\u0026quot;:{{ .Content | plainify | jsonify }} {{ if .Params.tags }}, \u0026quot;tags\u0026quot;:{{ .Params.tags | jsonify }}{{end}}, \u0026quot;categories\u0026quot; : {{ .Params.categories | jsonify }}, ... 
\\`\nEdit the fuse.js options to search in static/js/search.js ` keys: [ \u0026quot;title\u0026quot;, \u0026quot;contents\u0026quot;, \u0026quot;tags\u0026quot;, \u0026quot;categories\u0026quot; ] \\`\n","permalink":"/search/","tags":null,"title":"Search Results"},{"categories":null,"contents":" PhD Write my first workshop paper as main author Write my first journal paper Write my first book chapter Chair a W3C Community Group Collaborate in a W3C recommendation Become a doctor! Technical Write a NodeJS App. Maia [See ISSUES] Write my first Django Application Develop a distributed LibP2P golang application Github repo with +100 stars Languages English Chinese Greek German Esperanto Personal Run a 10k Blog regularly for a year ","permalink":"/page/todo/","tags":null,"title":"To-do"}]