codingdirectional: The final user interface of this video editing application

Sorry for not posting anything yesterday; I was down with the flu, and although I still feel a little tired and sick today, I wanted to at least get a post up on this site. After a day of hard work I have finally finished tidying up the user interface of this video editing application. I hope there are no bugs left in the logic of this revised program, but if you find any in the program below, do let me know.

from tkinter import *
from tkinter import filedialog
import os
import subprocess
import tkinter.ttk as tk

win = Tk()  # create the tk instance
win.title("NeW Vid")  # add a title
win.resizable(0, 0)  # disable resizing the GUI
win.configure(background='white')  # change the background colour

mainframe = Frame(win)  # create the main frame
mainframe.pack()

eqFrame = Frame(win)  # create the eq frame
eqFrame.pack(side=TOP, fill=X)

animatedFrame = Frame(win)  # create the animated-image frame
animatedFrame.pack(side=TOP, fill=X)

trimFrame = Frame(win)  # create the trim frame
trimFrame.pack(side=TOP, fill=X)

buttonFrame = Frame(win)  # create the button frame
buttonFrame.pack(side=BOTTOM, fill=X, pady=6)

# Create a label and scale box for each eq setting
contrast_variable = DoubleVar()
contrast = Scale(eqFrame, from_=float(-2.00), to=float(2.00), orient=HORIZONTAL,
                 label="CONTRAST", digits=3, resolution=0.01, variable=contrast_variable)
contrast.set(1)
contrast.pack(side=LEFT)

brightness_variable = DoubleVar()
brightness = Scale(eqFrame, from_=float(-1.00), to=float(1.00), orient=HORIZONTAL,
                   label="BRIGHTNESS", digits=3, resolution=0.01, variable=brightness_variable)
brightness.pack(side=LEFT)

saturation_variable = DoubleVar()
saturation = Scale(eqFrame, from_=float(0.00), to=float(3.00), orient=HORIZONTAL,
                   label="SATURATION", digits=3, resolution=0.01, variable=saturation_variable)
saturation.set(1)
saturation.pack(side=LEFT)

gamma_variable = DoubleVar()
gamma = Scale(eqFrame, from_=float(0.10), to=float(10.00), orient=HORIZONTAL,
              label="GAMMA", digits=4, resolution=0.01, variable=gamma_variable)
gamma.set(1)
gamma.pack(side=LEFT)

loop_variable = DoubleVar()
loop = Scale(eqFrame, from_=float(0), to=float(10), orient=HORIZONTAL,
             label="REPEAT", digits=2, resolution=1, variable=loop_variable)
loop.pack(side=LEFT)

fr_variable = DoubleVar()
fr = Scale(eqFrame, from_=float(9), to=float(60), orient=HORIZONTAL,
           label="FPS", digits=2, resolution=1, variable=fr_variable)
fr.set(24)
fr.pack(side=LEFT)

# Create the animated-gif controls
anime = Label(animatedFrame, text="Create Animated Image from Video   ")
anime.pack(side=TOP)
anime.pack(side=LEFT)

from_ = Label(animatedFrame, text="Start From (hour : minute : second)  ")
from_.pack(side=BOTTOM)
from_.pack(side=LEFT)
from_t_h_varable = StringVar()
from_t_h = Entry(animatedFrame, width=3, textvariable=from_t_h_varable)
from_t_h.pack(side=BOTTOM)
from_t_h.pack(side=LEFT)
from_m = Label(animatedFrame, text=" : ")
from_m.pack(side=BOTTOM)
from_m.pack(side=LEFT)
from_t_m_varable = StringVar()
from_t_m = Entry(animatedFrame, width=3, textvariable=from_t_m_varable)
from_t_m.pack(side=BOTTOM)
from_t_m.pack(side=LEFT)
from_s = Label(animatedFrame, text=" : ")
from_s.pack(side=BOTTOM)
from_s.pack(side=LEFT)
from_t_s_varable = StringVar()
from_t_s = Entry(animatedFrame, width=3, textvariable=from_t_s_varable)
from_t_s.pack(side=BOTTOM)
from_t_s.pack(side=LEFT)

to_ = Label(animatedFrame, text="  To (in second)  ")
to_.pack(side=BOTTOM)
to_.pack(side=LEFT)
# (The hour and minute entry boxes for the gif end time are commented out in this
# version; only the duration in seconds below is used.)
to_t_s_varable = StringVar()
to_t_s = Entry(animatedFrame, width=3, textvariable=to_t_s_varable)
to_t_s.pack(side=BOTTOM)
to_t_s.pack(side=LEFT)

# Create the trim-video controls
trim = Label(trimFrame, text="Trim Video   ")
trim.pack(side=TOP)
trim.pack(side=LEFT)

trim_from_ = Label(trimFrame, text="Start From (hour : minute : second)  ")
trim_from_.pack(side=BOTTOM)
trim_from_.pack(side=LEFT)
trim_from_t_h_varable = StringVar()
trim_from_t_h = Entry(trimFrame, width=3, textvariable=trim_from_t_h_varable)
trim_from_t_h.pack(side=BOTTOM)
trim_from_t_h.pack(side=LEFT)
trim_from_m = Label(trimFrame, text=" : ")
trim_from_m.pack(side=BOTTOM)
trim_from_m.pack(side=LEFT)
trim_from_t_m_varable = StringVar()
trim_from_t_m = Entry(trimFrame, width=3, textvariable=trim_from_t_m_varable)
trim_from_t_m.pack(side=BOTTOM)
trim_from_t_m.pack(side=LEFT)
trim_from_s = Label(trimFrame, text=" : ")
trim_from_s.pack(side=BOTTOM)
trim_from_s.pack(side=LEFT)
trim_from_t_s_varable = StringVar()
trim_from_t_s = Entry(trimFrame, width=3, textvariable=trim_from_t_s_varable)
trim_from_t_s.pack(side=BOTTOM)
trim_from_t_s.pack(side=LEFT)

trim_to_ = Label(trimFrame, text="  To (in second)  ")
trim_to_.pack(side=BOTTOM)
trim_to_.pack(side=LEFT)
trim_to_t_h_varable = StringVar()
trim_to_t_h = Entry(trimFrame, width=3, textvariable=trim_to_t_h_varable)
trim_to_t_h.pack(side=BOTTOM)
trim_to_t_h.pack(side=LEFT)
trim_to_m = Label(trimFrame, text=" : ")
trim_to_m.pack(side=BOTTOM)
trim_to_m.pack(side=LEFT)
trim_to_t_m_varable = StringVar()
trim_to_t_m = Entry(trimFrame, width=3, textvariable=trim_to_t_m_varable)
trim_to_t_m.pack(side=BOTTOM)
trim_to_t_m.pack(side=LEFT)
trim_to_s = Label(trimFrame, text=" : ")
trim_to_s.pack(side=BOTTOM)
trim_to_s.pack(side=LEFT)
trim_to_t_s_varable = StringVar()
trim_to_t_s = Entry(trimFrame, width=3, textvariable=trim_to_t_s_varable)
trim_to_t_s.pack(side=BOTTOM)
trim_to_t_s.pack(side=LEFT)

# Create a combo box for the output width
vid_size = StringVar()  # create a string variable
preferSize = tk.Combobox(mainframe, textvariable=vid_size)
preferSize['values'] = (1920, 1280, 854, 640)  # video width in pixels
preferSize.current(0)  # select the first item
preferSize.pack(side=LEFT)

# Create a combo box for the output format
vid_format = StringVar()  # create a string variable
preferFormat = tk.Combobox(mainframe, textvariable=vid_format)
preferFormat['values'] = ('.mp4', '.webm', '.avi', '.wmv', '.mpg', '.ogv')  # video format
preferFormat.current(0)  # select the first item
preferFormat.pack(side=LEFT)

removeAudioVal = IntVar()
removeAudio = tk.Checkbutton(mainframe, text="Remove Audio", variable=removeAudioVal)
removeAudio.pack(side=LEFT, padx=3)

newAudio = IntVar()
aNewAudio = tk.Checkbutton(mainframe, text="New Audio", variable=newAudio)
aNewAudio.pack(side=LEFT, padx=2)

count = 0  # counter used to create multiple videos


# Open a video file and process it
def openVideo():
    fullfilename = filedialog.askopenfilename(initialdir="/", title="Select a file",
                                              filetypes=[("Video file", "*.mp4; *.avi ")])  # select a video file from the hard drive
    audiofilename = ''

    if fullfilename != '':
        global count  # access the global count variable
        scale_vid = preferSize.get()  # retrieve the value from the combo box
        new_size = str(scale_vid)
        dir_path = os.path.dirname(os.path.realpath(fullfilename))
        trim_video = False  # set the trim-video flag to false
        file_extension = fullfilename.split('.')[-1]  # extract the video format from the original video
        os.chdir(dir_path)  # change the directory to the original file's directory

        f = '_new_vid_' + new_size + '.' + file_extension  # the new output file name
        f2 = str(count) + f  # second video
        f_gif = str(count) + f + '.gif'  # animated gif file name
        count += 1  # increase the video counter for the next video

        # Create an animated image from the video
        animi_from_hour = from_t_h_varable.get()
        animi_from_minute = from_t_m_varable.get()
        animi_from_second = from_t_s_varable.get()
        animi_to_second = to_t_s_varable.get()

        # If the time fields are not empty and contain digits, only the animated gif will be created
        if ((animi_from_hour != '' and animi_from_hour.isdigit())
                and (animi_from_minute != '' and animi_from_minute.isdigit())
                and (animi_from_second != '' and animi_from_second.isdigit())
                and (animi_to_second != '' and animi_to_second.isdigit())):
            subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', f])  # resize the video
            subprocess.call(['ffmpeg', '-i', f, '-vf',
                             'eq=contrast=' + str(contrast_variable.get())
                             + ':brightness=' + str(brightness_variable.get())
                             + ':saturation=' + str(saturation_variable.get())
                             + ':gamma=' + str(gamma_variable.get()), '-y', f2])  # adjust the contrast, brightness, saturation and gamma of the video
            subprocess.call(['ffmpeg', '-i', f2, '-ss',
                             animi_from_hour + ':' + animi_from_minute + ':' + animi_from_second,
                             '-t', animi_to_second, '-y', f_gif])  # create the animated gif from the starting point for the given duration
            os.remove(f)
            os.remove(f2)
            return 0

        if newAudio.get() == 1:
            audiofilename = filedialog.askopenfilename(initialdir="/", title="Select a file",
                                                       filetypes=[("Audio file", "*.wav; *.ogg ")])  # select a new audio file from the hard drive

        # The video editing part starts here
        noAudio = removeAudioVal.get()  # get the checkbox state for audio

        subprocess.call(['ffmpeg', '-stream_loop', str(loop_variable.get()), '-i', fullfilename,
                         '-vf', 'scale=' + new_size + ':-1', '-y', '-r', str(fr_variable.get()), f])  # resize, change the frame rate and loop the video with ffmpeg
        subprocess.call(['ffmpeg', '-i', f, '-vf',
                         'eq=contrast=' + str(contrast_variable.get())
                         + ':brightness=' + str(brightness_variable.get())
                         + ':saturation=' + str(saturation_variable.get())
                         + ':gamma=' + str(gamma_variable.get()), '-y', f2])  # adjust the contrast, brightness, saturation and gamma of the video

        # Trim-video starting point and end point
        trim_from_hour = trim_from_t_h_varable.get()
        trim_from_minute = trim_from_t_m_varable.get()
        trim_from_second = trim_from_t_s_varable.get()
        trim_to_hour = trim_to_t_h_varable.get()
        trim_to_minute = trim_to_t_m_varable.get()
        trim_to_second = trim_to_t_s_varable.get()

        # If the time fields are not empty and contain digits, trim the video
        if ((trim_from_hour != '' and trim_from_hour.isdigit())
                and (trim_from_minute != '' and trim_from_minute.isdigit())
                and (trim_from_second != '' and trim_from_second.isdigit())
                and (trim_to_second != '' and trim_to_second.isdigit())
                and (trim_to_minute != '' and trim_to_minute.isdigit())
                and (trim_to_hour != '' and trim_to_hour.isdigit())):
            subprocess.call(['ffmpeg', '-i', f2, '-ss',
                             trim_from_hour + ':' + trim_from_minute + ':' + trim_from_second,
                             '-t', trim_to_hour + ':' + trim_to_minute + ':' + trim_to_second,
                             '-y', '-c:v', 'copy', '-c:a', 'copy', f])  # trim the video from the start point to the end point
            trim_video = True

        if noAudio == 1 and trim_video == True:
            subprocess.call(['ffmpeg', '-i', f, '-c', 'copy', '-y', '-an', f2])  # remove the audio from the video
        elif noAudio == 1 and trim_video == False:
            subprocess.call(['ffmpeg', '-i', f2, '-c', 'copy', '-y', '-an', f])  # remove the audio from the video

        if audiofilename != '' and noAudio == 1 and newAudio.get() == 1 and trim_video == False:
            subprocess.call(['ffmpeg', '-i', f, '-i', audiofilename, '-shortest', '-c:v', 'copy',
                             '-b:a', '256k', '-y', f2])  # add the new audio to the video; -shortest trims to whichever stream is shorter
        elif audiofilename != '' and noAudio == 1 and newAudio.get() == 1 and trim_video == True:
            subprocess.call(['ffmpeg', '-i', f2, '-i', audiofilename, '-shortest', '-c:v', 'copy',
                             '-b:a', '256k', '-y', f])  # add the new audio to the video; -shortest trims to whichever stream is shorter

        f3 = f + vid_format.get()  # the final video format

        if f3.split('.')[-1] != f2.split('.')[-1] and trim_video == True and noAudio == 1 and newAudio.get() == 1 and audiofilename != '':
            subprocess.call(['ffmpeg', '-i', f, '-y', f3])  # convert the video with ffmpeg
            os.remove(f2)  # remove both intermediate videos
            os.remove(f)
        elif f3.split('.')[-1] != f2.split('.')[-1] and trim_video == False and noAudio == 1 and newAudio.get() == 1 and audiofilename != '':
            subprocess.call(['ffmpeg', '-i', f2, '-y', f3])  # convert the video with ffmpeg
            os.remove(f2)  # remove both intermediate videos
            os.remove(f)
        elif f3.split('.')[-1] != f2.split('.')[-1] and trim_video == False and noAudio != 1 and newAudio.get() != 1 and audiofilename == '':
            subprocess.call(['ffmpeg', '-i', f2, '-y', f3])  # convert the video with ffmpeg
            os.remove(f2)  # remove both intermediate videos
            os.remove(f)
        elif f3.split('.')[-1] == f2.split('.')[-1] and trim_video == True and noAudio == 1 and audiofilename != '':
            os.remove(f2)  # remove one intermediate video
        elif f3.split('.')[-1] == f2.split('.')[-1] and trim_video == True and noAudio != 1:
            os.remove(f2)  # remove one intermediate video
        elif f3.split('.')[-1] == f2.split('.')[-1] and trim_video == False and noAudio != 1:
            os.remove(f)  # remove one intermediate video
        elif f3.split('.')[-1] == f2.split('.')[-1] and trim_video == False and noAudio == 1:
            os.remove(f2)  # remove one intermediate video
        else:
            os.remove(f)  # remove one intermediate video

        trim_video = False  # reset the trim-video flag to false


action_vid = tk.Button(buttonFrame, text="Open Video", command=openVideo)
action_vid.pack(fill=X)

win.mainloop()

The above program includes two new features: 1) it allows the user to trim the video from one point to another, and 2) it provides equalizer controls for both the animated image and the video. Below is one of the animated images created with the program.

We have one more thing to deal with, which is the threading issue, and then we will be able to enjoy this application online together!
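For anyone else hitting the same problem: the subprocess.call commands in openVideo run on the Tkinter main thread, so the window freezes until each ffmpeg command finishes, which is presumably the threading issue mentioned above. Below is a minimal, self-contained sketch of one common way around it, using only the standard library; encode_video here is just a stand-in for the real ffmpeg calls and is not part of the program above.

import subprocess
import threading
from tkinter import Tk, Button, Label, X

def encode_video():
    # Stand-in for the long-running ffmpeg work; the real subprocess.call
    # commands from openVideo() would go here instead.
    try:
        subprocess.call(['ffmpeg', '-version'])
    except FileNotFoundError:
        pass  # ffmpeg not on the PATH; the threading pattern is the point here

def run_and_report():
    encode_video()
    # Hand the widget update back to the main thread; Tkinter widgets should
    # not be touched directly from a worker thread.
    win.after(0, lambda: status.config(text="Done"))

def start_encode():
    status.config(text="Working...")
    threading.Thread(target=run_and_report, daemon=True).start()

win = Tk()
status = Label(win, text="Idle")
status.pack(fill=X)
Button(win, text="Run ffmpeg in background", command=start_encode).pack(fill=X)
win.mainloop()

In the real application the file dialogs and the widget .get() calls would stay on the main thread, and only the ffmpeg commands would move to the worker thread.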

Planet Python

Digital Nomad: A Day in the Life


I am finishing breakfast while my Taxi app shows that the taxi driver will be in front of my house in five minutes. One last sip of coffee and off we go…

This is how most of my days start when I am travelling to one of my exciting client gigs or trainings around Germany, Switzerland, Austria and sometimes the BeNeLux countries. That excitement of “Where am I going next?” and “What will this day bring?” never drops because each day in the life of a “digital nomad” is different, and no two days are the same.

The Perks of a Remote Life

I chose to live in Bonn, Germany, two years ago, right before I started working with InterWorks. I did so because most of my friends live here, and Bonn, the “little brother” and neighbour city of Cologne (one of the major and most exciting German cities), is a great place to live, surrounded by beautiful nature.

This is also one of the biggest advantages of being a digital nomad: you can choose where you want to live. The only requirement for your home base is a solid Internet connection, as well as some kind of mobility that gets you to all the places where you are working. This is why my team is spread out over Germany and the Netherlands (Berlin, Frankfurt, Düsseldorf, Nuremberg, Amsterdam and Amersfoort).

Planes, Trains and Automobiles

Once I sit in the taxi that takes me to the train station, I check the news and the few emails I’ve received. As a company that works mostly remotely, we have spent a lot of time and effort figuring out the best, least stressful way to do so, which is why we use tools like Slack, Zoom and Salesforce to organise our communication. This saves plenty of time and emails (there is an interesting Washington Post article on how much time we spend on email).

Once I arrive at the train station, I board the high-speed ICE train. Another great thing about being a digital nomad, or travelling consultant, is that I can mostly choose how I get from A to B as long as it is reasonable. For my part, I love travelling by train. It is most often cheaper than flying, you travel with zero emissions, and you have enough space and good enough Wi-Fi to work comfortably or watch a TV show on the onboard entertainment system.

[Image: Digital Nomad remote commute]

Above: My mobile desk at 300km/h or 190mph

From Bonn, it takes about 2-3 hours by train to get to places like Frankfurt, Stuttgart or Amsterdam or by plane to places like Berlin, Vienna, Zurich or Munich. As you can imagine, I travel a lot, but there are lots of little aspects that can make travelling more bearable: knowing how to combine modes of transport to reduce waiting times; knowing your favourite hotel (chains); owning noise-cancelling headphones (!); and discovering your favourite restaurants all over the place.

At the Client Site

Having arrived at the client site, I start off with a face-to-face meeting with the stakeholders of our project to discuss which challenge we want to tackle next. Depending on the client, I work with cloud-based tools like Trello, Jira, Smartsheets or Google Drive during such meetings to manage projects, organise our work and track progress.

The cloud has really made working remotely possible and easy. You can take your work wherever you want as long as there is an Internet connection. Furthermore, you are not limited to office spaces and network cables but can work from a café, airport or even some village in the Alps.

Back to my day … After that first meeting with the project stakeholders, I sit together with the analyst to start developing ideas, formulating questions and creating dashboards. How this is done and how long it takes depends, of course, on the project scope, client size and availability of data and people. Hence, this process can range from a few hours to a couple of days.

Connecting with My Team Virtually

In the afternoon, I have my weekly one-on-one meeting with my team lead, which is done via the video conferencing app Zoom. Of course, it is different to talk to someone in person because you make real eye contact, and you don’t just see a small portion of that person’s surroundings. However, you do get used to having these meetings on your screen. Also, options like screensharing facilitate showing one’s work to someone else.

My team lead and I talk about how our projects are going and whether there are any difficulties or issues, and we brainstorm about the team Hackathon we have planned for our next team meeting. Although I don’t see my team lead and colleagues in person that often, we have plenty of opportunities to get together, like our quarterly team meetings, the Winter Conference and many more events.

[Image: Christkindl Market in Düsseldorf]

Above: Our team meeting at the Christkindl Market in Düsseldorf

After some more time with the client, at around 17.00h, I pack up my stuff and leave for the train station to return home. Depending on how far away from home I am and how many consecutive days I am travelling, I often stay in a hotel, but today I am taking the train back home. I use the time on the train to respond to some Slack messages I received during the day and listen to my favourite podcast. Staying at a hotel can be very nice as well, as there is so much to do in most of the places I visit, whether it be going for dinner with a colleague or friend, going for a hike, run or swim, or attending a concert.

I arrive back home for dinner with my friends and am already looking forward to what tomorrow will bring!

[Image: Black Forest in Germany]

Above: There are always things to do in the evening—even in remote places like the Black Forest in Germany


InterWorks

Tableau Class Notes: To Aggregate or Not to Aggregate? Part 2


As a follow-up to my prior blog post, I wanted to explore an additional use case for aggregation in Tableau: joining together data that might be at different date levels. It’s the same premise—you have two data sources at two levels of detail (e.g. row-level transactions vs. regional goals; employee-level vs. team-level), but the steps to aggregate that data are slightly different.

In this example, we have weekly profit data, as well as daily sales data. Our end goal is to return three columns with our daily sales rolled up to the weekly level so that we can compare it against our profits:

[Image: weekly and daily profit data in Tableau]

Since one data source is at the weekly level, and one is at the daily level, adding an aggregation will be necessary to return data at the correct level.

When Aggregation is Necessary

A rule of thumb for aggregation: your data sources must be aggregated so that they match the data source with the highest level of aggregation. In our example, we have our data split out by week and by day. We couldn’t reliably break our weekly data down to the daily level since we don’t know what days each profit occurred on, so we should aggregate our daily data to the weekly level.

Once we add an Aggregate step, we need to specify what fields we are grouping by and what we are aggregating:

[Image: grouped and aggregated fields in Tableau]

Click on the word Group next to your data type icon to change your “Group By” level to “Week Start” so it matches our Weekly target data. If you want to read more about how Tableau treats our different options, check out this video from my colleague, Katie! Next, we can add a Clean step and join our two data sources together on our two date fields to get our final data source:

[Image: clean aggregated data in Tableau]
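Tableau Prep handles all of this through its Aggregate, Clean and Join steps, but if it helps to see the same logic written out, here is a rough pandas sketch of the equivalent transformation. The column names (week_start, order_date, sales, profit), the sample values and the Monday-based week are illustrative assumptions, not taken from the screenshots.

import pandas as pd

# Illustrative stand-ins for the two sources: weekly profit and daily sales.
weekly_profit = pd.DataFrame({
    "week_start": pd.to_datetime(["2019-01-07", "2019-01-14"]),
    "profit": [1200.0, 950.0],
})
daily_sales = pd.DataFrame({
    "order_date": pd.to_datetime(["2019-01-07", "2019-01-09", "2019-01-15"]),
    "sales": [300.0, 450.0, 500.0],
})

# Aggregate step: roll each daily row up to its week start (Monday-based here),
# mirroring "Group By: Week Start" with SUM(sales).
daily_sales["week_start"] = daily_sales["order_date"].dt.to_period("W-SUN").dt.start_time
weekly_sales = daily_sales.groupby("week_start", as_index=False)["sales"].sum()

# Join step: both sources are now at the same weekly level of detail, so joining
# on the date field returns the three columns described above.
result = weekly_profit.merge(weekly_sales, on="week_start", how="left")
print(result)  # week_start, profit, sales

The groupby mirrors the Aggregate step and the merge mirrors the Join step; everything else is just sample data.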

Hopefully this clears up any questions you have about how to tackle situations where you might have data at two different date levels. Cleaning it up in Prep before you bring it into Tableau Desktop will make your life infinitely easier by minimizing the need for calculations. Thanks to Kent Sloan for assistance on this blog, and thanks for reading!


InterWorks

Continuum Analytics Blog: RPM and Debian Repositories for Miniconda

Conda, the package manager from Anaconda, is now available as either a RedHat RPM or a Debian package. These packages are equivalent to the Miniconda installer, which contains only conda and its dependencies. You can use yum or apt-get to install, uninstall and manage conda on your system. To install conda, follow the …
Read more →


Planet Python