Advance with Assist: Editing Custom Fields in Tableau-Published Data Sources

Question: I have new records in my Excel data that were not there when I created my custom groups. Tableau Desktop isn’t letting me update the group anymore. How can I make these changes so that they take effect across all my workbooks?

This question can come up for a variety of the self-service features Tableau offers. In this example we are discussing groups, but calculated fields, bins, parameter lists built from fields and so on are all related and can follow a similar solution path to the one described below.

Creating Custom Groups in Tableau Desktop

When creating groups or other self-service fields, we must remember that we are adding to the metadata of the original data source we connected to. In this example, we have a user creating custom groups within Tableau Desktop to Excel:

creating custom groups in Tableau

This is a manual process that uses the data currently present to give you the members available for grouping:

creating custom groups in Tableau

Once this user had created the group, they published the data source to Tableau Server to share it with their users, who then have this group available for their analysis and self-service reporting needs.

Here's where the issue in the user's question presents itself: they can no longer edit the group. When they try, they see only an Edit Copy option instead of Edit Group:

editing a group of data in Tableau

The reason is that once a data source is published to Tableau Server, you must view the connection like you would a database. The Tableau extract has created that custom field, and it exists in the data as if it had been there from the start:

Tableau extract and custom field

Editing Your Data Group in Tableau

If you need to make changes to this group, treat the published data source like you would a database: you have to change the original connection and then republish it, similar to making changes in an ETL process that then show up in the data table.

Since this user was the owner of the dataset, they simply right-clicked the published connection and created a local copy of the connection:

create a local copy in Tableau

Now with the Local Copy connection, they can edit the group again and republish. This will overwrite the original published version, so everyone can then see the updates in their connected workbooks:

edit a local copy in Tableau

Note: When publishing back out, make sure you see the message that the data source name already exists and choose to overwrite it. That's the only way others will see your updates.
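If you have many data sources to maintain, this republish-and-overwrite step can also be scripted. Below is a minimal sketch (not part of the original post) using the tableauserverclient Python library; the server URL, site, token, project name and .tdsx file path are all placeholders you would replace with your own values.

# Minimal sketch, assuming tableauserverclient is installed and the data source
# has already been edited locally. All names and URLs below are placeholders.
import tableauserverclient as TSC

tableau_auth = TSC.PersonalAccessTokenAuth('my-token-name', 'my-token-secret', site_id='my-site')
server = TSC.Server('https://my-tableau-server.example.com', use_server_version=True)

with server.auth.sign_in(tableau_auth):
    # find the project that holds the published data source
    all_projects, _ = server.projects.get()
    project = next(p for p in all_projects if p.name == 'My Project')

    # republish with Overwrite so connected workbooks pick up the edited group
    new_datasource = TSC.DatasourceItem(project.id)
    server.datasources.publish(new_datasource, 'my_edited_datasource.tdsx',
                               mode=TSC.Server.PublishMode.Overwrite)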


InterWorks

codingdirectional: Create a thread for the video editing application

Hello and welcome back to this final part of the video editing project. After this chapter you should know enough to continue developing this application by yourself, and hopefully you will come up with better ideas for how to develop this video editing application further.

In this chapter I have created a thread for the video editing application, which separates the application into two parts: the user interface part and the thread part. The user interface part takes care of the user's click events and input, and the threaded part processes that input.
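If the idea is new to you, the pattern looks roughly like the minimal sketch below (the names are illustrative only, not the actual program): the button callback collects the values from the widgets and hands them to a threading.Thread subclass whose run() method does the slow work.

# Minimal sketch of the UI/worker split used in this chapter (illustrative names only)
import threading
from tkinter import Tk, Button

class Worker(threading.Thread):
    def __init__(self, value):
        threading.Thread.__init__(self)
        self.value = value                 # data collected from the UI

    def run(self):
        print('processing', self.value)    # the slow ffmpeg calls would go here

def on_click():
    Worker('some user input').start()      # the UI stays responsive while run() executes

win = Tk()
Button(win, text='Go', command=on_click).pack()
win.mainloop()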

There is nothing new in this chapter besides separating the application into those two parts. The first part is the user interface.

from tkinter import *
from tkinter import filedialog
import tkinter.ttk as tk
from tkinter import messagebox
from NewVid import NewVid
import webbrowser

win = Tk() # Create tk instance
win.title("NeW Vid") # Add a title
win.resizable(0, 0) # Disable resizing the GUI
win.configure(background='white') # change background color

mainframe = Frame(win) # create a frame
mainframe.pack()

eqFrame = Frame(win) # create eq frame
eqFrame.pack(side = TOP, fill=X)

animatedFrame = Frame(win) # create animated frame
animatedFrame.pack(side = TOP, fill=X)

trimFrame = Frame(win) # create trim frame
trimFrame.pack(side = TOP, fill=X)

buttonFrame = Frame(win) # create a button frame
buttonFrame.pack(side = BOTTOM, fill=X, pady = 6)

# Create a label and scale box for eq
contrast_variable = DoubleVar()
contrast = Scale(eqFrame, from_=float(-2.00), to=float(2.00), orient=HORIZONTAL, label="CONTRAST", digits=3, resolution=0.01, variable=contrast_variable)
contrast.set(1)
contrast.pack(side = LEFT)
brightness_variable = DoubleVar()
brightness = Scale(eqFrame, from_=float(-1.00), to=float(1.00), orient=HORIZONTAL, label="BRIGHTNESS", digits=3, resolution=0.01, variable=brightness_variable)
brightness.pack(side = LEFT)
saturation_variable = DoubleVar()
saturation = Scale(eqFrame, from_=float(0.00), to=float(3.00), orient=HORIZONTAL, label="SATURATION", digits=3, resolution=0.01, variable=saturation_variable)
saturation.set(1)
saturation.pack(side = LEFT)
gamma_variable = DoubleVar()
gamma = Scale(eqFrame, from_=float(0.10), to=float(10.00), orient=HORIZONTAL, label="GAMMA", digits=4, resolution=0.01, variable=gamma_variable)
gamma.set(1)
gamma.pack(side = LEFT)
loop_variable = DoubleVar()
loop = Scale(eqFrame, from_=float(0), to=float(10), orient=HORIZONTAL, label="REPEAT", digits=2, resolution=1, variable=loop_variable)
loop.pack(side = LEFT)
fr_variable = DoubleVar()
fr = Scale(eqFrame, from_=float(9), to=float(60), orient=HORIZONTAL, label="FPS", digits=2, resolution=1, variable=fr_variable)
fr.set(24)
fr.pack(side = LEFT)

# create animated gif
anime = Label(animatedFrame, text="Create Animated Image from Video   ")
anime.pack(side = TOP)
anime.pack(side = LEFT)

from_ = Label(animatedFrame, text="Start From (hour : minute : second)  ")
from_.pack(side = BOTTOM)
from_.pack(side = LEFT)
from_t_h_varable = StringVar()
from_t_h = Entry(animatedFrame, width=3, textvariable=from_t_h_varable)
from_t_h.pack(side=BOTTOM)
from_t_h.pack(side=LEFT)
from_m = Label(animatedFrame, text=" : ")
from_m.pack(side = BOTTOM)
from_m.pack(side = LEFT)
from_t_m_varable = StringVar()
from_t_m = Entry(animatedFrame, width=3, textvariable=from_t_m_varable)
from_t_m.pack(side=BOTTOM)
from_t_m.pack(side=LEFT)
from_s = Label(animatedFrame, text=" : ")
from_s.pack(side = BOTTOM)
from_s.pack(side = LEFT)
from_t_s_varable = StringVar()
from_t_s = Entry(animatedFrame, width=3, textvariable=from_t_s_varable)
from_t_s.pack(side=BOTTOM)
from_t_s.pack(side=LEFT)

to_ = Label(animatedFrame, text="  To (in second)  ")
to_.pack(side = BOTTOM)
to_.pack(side = LEFT)
to_t_s_varable = StringVar()
to_t_s = Entry(animatedFrame, width=3, textvariable=to_t_s_varable)
to_t_s.pack(side=BOTTOM)
to_t_s.pack(side=LEFT)

# trim video
trim = Label(trimFrame, text="Trim Video   ")
trim.pack(side = TOP)
trim.pack(side = LEFT)

trim_from_ = Label(trimFrame, text="Start From (hour : minute : second)  ")
trim_from_.pack(side = BOTTOM)
trim_from_.pack(side = LEFT)
trim_from_t_h_varable = StringVar()
trim_from_t_h = Entry(trimFrame, width=3, textvariable=trim_from_t_h_varable)
trim_from_t_h.pack(side=BOTTOM)
trim_from_t_h.pack(side=LEFT)
trim_from_m = Label(trimFrame, text=" : ")
trim_from_m.pack(side = BOTTOM)
trim_from_m.pack(side = LEFT)
trim_from_t_m_varable = StringVar()
trim_from_t_m = Entry(trimFrame, width=3, textvariable=trim_from_t_m_varable)
trim_from_t_m.pack(side=BOTTOM)
trim_from_t_m.pack(side=LEFT)
trim_from_s = Label(trimFrame, text=" : ")
trim_from_s.pack(side = BOTTOM)
trim_from_s.pack(side = LEFT)
trim_from_t_s_varable = StringVar()
trim_from_t_s = Entry(trimFrame, width=3, textvariable=trim_from_t_s_varable)
trim_from_t_s.pack(side=BOTTOM)
trim_from_t_s.pack(side=LEFT)

trim_to_ = Label(trimFrame, text="  To (in second)  ")
trim_to_.pack(side = BOTTOM)
trim_to_.pack(side = LEFT)
trim_to_t_h_varable = StringVar()
trim_to_t_h = Entry(trimFrame, width=3, textvariable=trim_to_t_h_varable)
trim_to_t_h.pack(side=BOTTOM)
trim_to_t_h.pack(side=LEFT)
trim_to_m = Label(trimFrame, text=" : ")
trim_to_m.pack(side = BOTTOM)
trim_to_m.pack(side = LEFT)
trim_to_t_m_varable = StringVar()
trim_to_t_m = Entry(trimFrame, width=3, textvariable=trim_to_t_m_varable)
trim_to_t_m.pack(side=BOTTOM)
trim_to_t_m.pack(side=LEFT)
trim_to_s = Label(trimFrame, text=" : ")
trim_to_s.pack(side = BOTTOM)
trim_to_s.pack(side = LEFT)
trim_to_t_s_varable = StringVar()
trim_to_t_s = Entry(trimFrame, width=3, textvariable=trim_to_t_s_varable)
trim_to_t_s.pack(side=BOTTOM)
trim_to_t_s.pack(side=LEFT)

# Create a combo box
vid_size = StringVar() # create a string variable
preferSize = tk.Combobox(mainframe, textvariable=vid_size)
preferSize['values'] = (1920, 1280, 854, 640) # video width in pixels
preferSize.current(0) # select item one
preferSize.pack(side = LEFT)

# Create a combo box
vid_format = StringVar() # create a string variable
preferFormat = tk.Combobox(mainframe, textvariable=vid_format)
preferFormat['values'] = ('.mp4', '.webm', '.avi', '.wmv', '.mpg', '.ogv') # video format
preferFormat.current(0) # select item one
preferFormat.pack(side = LEFT)

removeAudioVal = IntVar()
removeAudio = tk.Checkbutton(mainframe, text="Remove Audio", variable=removeAudioVal)
removeAudio.pack(side = LEFT, padx=3)

newAudio = IntVar()
aNewAudio = tk.Checkbutton(mainframe, text="New Audio", variable=newAudio)
aNewAudio.pack(side = LEFT, padx=2)

count = 0 # counter used to create multiple videos

btn_text = StringVar() # button text

# Open a video file
def openVideo():

    audiofilename = ''
    fullfilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Video file", "*.mp4; *.avi ")]) # select a video file from the hard drive
    if(newAudio.get() == 1):
        audiofilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Audio file", "*.wav; *.ogg ")]) # select a new audio file from the hard drive
    global count # access the global count variable

    if(fullfilename != ''):

        scale_vid = preferSize.get() # retrieve value from the combo box
        new_size = str(scale_vid)
        file_extension = fullfilename.split('.')[-1] # extract the video format from the original video
        f = '_new_vid_' + new_size + '.' + file_extension # the new output file name
        f2 = str(count) + f # second video
        f_gif = str(count) + f + '.gif' # create animated gif
        count += 1 # increase video counter for new video

        # create animated image from video
        animi_from_hour = from_t_h_varable.get()
        animi_from_minute = from_t_m_varable.get()
        animi_from_second = from_t_s_varable.get()
        animi_to_second = to_t_s_varable.get()

        # video editing part start here
        noAudio = removeAudioVal.get() # get the checkbox state for audio

        # trim video starting point and end point
        trim_from_hour = trim_from_t_h_varable.get()
        trim_from_minute = trim_from_t_m_varable.get()
        trim_from_second = trim_from_t_s_varable.get()

        trim_to_hour = trim_to_t_h_varable.get()
        trim_to_minute = trim_to_t_m_varable.get()
        trim_to_second = trim_to_t_s_varable.get()

        f3 = f + vid_format.get() # The final video format

        contrast_value = str(contrast_variable.get())
        brightness_value = str(brightness_variable.get())
        saturation_value = str(saturation_variable.get())
        gamma_value = str(gamma_variable.get())
        loop_value = str(loop_variable.get())
        frame_rate_value = str(fr_variable.get())

        try:
            new_vide_thread = NewVid(contrast_value, brightness_value, saturation_value, gamma_value, loop_value, frame_rate_value, new_size, noAudio, file_extension, f, f2, f3, f_gif, audiofilename, fullfilename, animi_from_hour, animi_from_minute, animi_from_second, animi_to_second, trim_from_hour, trim_from_minute, trim_from_second, trim_to_hour, trim_to_minute, trim_to_second)
            new_vide_thread.start()
            new_vide_thread.join() # note: join() makes the window wait here until the worker thread finishes
        except:
            messagebox.showinfo("Error", "You need to install FFmpeg before using this program!")
            webbrowser.open('https://www.ffmpeg.org/')

    else:
        messagebox.showinfo("Error", "You need to select a video file!")

action_vid = tk.Button(buttonFrame, command=openVideo, text="Select Video")
action_vid.pack(fill=X)

win.mainloop()

The second part is the thread class:

import os
import subprocess
import threading

class NewVid(threading.Thread):

    def __init__(self, contrast_value, brightness_value, saturation_value, gamma_value, loop_value, frame_rate_value, new_size, noAudio, file_extension, f, f2, f3, f_gif, audiofilename, fullfilename, animi_from_hour, animi_from_minute, animi_from_second, animi_to_second, trim_from_hour, trim_from_minute, trim_from_second, trim_to_hour, trim_to_minute, trim_to_second):
        threading.Thread.__init__(self)
        self.contrast_value = contrast_value
        self.brightness_value = brightness_value
        self.saturation_value = saturation_value
        self.gamma_value = gamma_value
        self.loop_value = loop_value
        self.frame_rate_value = frame_rate_value
        self.new_size = new_size
        self.noAudio = noAudio
        self.file_extension = file_extension
        self.f = f
        self.f2 = f2
        self.f3 = f3
        self.f_gif = f_gif
        self.audiofilename = audiofilename
        self.fullfilename = fullfilename
        self.animi_from_hour = animi_from_hour
        self.animi_from_minute = animi_from_minute # fixed: this was mistakenly assigned animi_from_hour
        self.animi_from_second = animi_from_second
        self.animi_to_second = animi_to_second
        self.trim_from_hour = trim_from_hour
        self.trim_from_minute = trim_from_minute
        self.trim_from_second = trim_from_second
        self.trim_to_hour = trim_to_hour
        self.trim_to_minute = trim_to_minute
        self.trim_to_second = trim_to_second

    # Processing video file
    def run(self):

        dir_path = os.path.dirname(os.path.realpath(self.fullfilename))

        self.trim_video = False # set the trim video flag to false
        os.chdir(dir_path) # change the directory to the original file's directory

        # if the time areas are not empty and they have a digit then only the animated gif will be created
        if((self.animi_from_hour != '' and self.animi_from_hour.isdigit()) and (self.animi_from_minute != '' and self.animi_from_minute.isdigit()) and (self.animi_from_second != '' and self.animi_from_second.isdigit()) and (self.animi_to_second != '' and self.animi_to_second.isdigit())):
            subprocess.call(['ffmpeg', '-i', self.fullfilename, '-vf', 'scale=' + self.new_size + ':-1', '-y', self.f]) # resize video
            subprocess.call(['ffmpeg', '-i', self.f, '-vf', 'eq=contrast=' + self.contrast_value + ':brightness=' + self.brightness_value + ':saturation=' + self.saturation_value + ':gamma=' + self.gamma_value, '-y', self.f2]) # adjust the saturation, gamma, contrast and brightness of video
            subprocess.call(['ffmpeg', '-i', self.f2, '-ss', self.animi_from_hour + ':' + self.animi_from_minute + ':' + self.animi_from_second, '-t', self.animi_to_second, '-y', self.f_gif]) # creating animated gif from starting point to end point
            os.remove(self.f)
            os.remove(self.f2)
            return 0

        # video editing part start here
        subprocess.call(['ffmpeg', '-stream_loop', self.loop_value, '-i', self.fullfilename, '-vf', 'scale=' + self.new_size + ':-1', '-y', '-r', self.frame_rate_value, self.f]) # resize, speedup and loop the video with ffmpeg
        subprocess.call(['ffmpeg', '-i', self.f, '-vf', 'eq=contrast=' + self.contrast_value + ':brightness=' + self.brightness_value + ':saturation=' + self.saturation_value + ':gamma=' + self.gamma_value, '-y', self.f2]) # adjust the saturation, gamma, contrast and brightness of video

        # if the time areas are not empty and they have a digit then trim the video
        if((self.trim_from_hour != '' and self.trim_from_hour.isdigit()) and (self.trim_from_minute != '' and self.trim_from_minute.isdigit()) and (self.trim_from_second != '' and self.trim_from_second.isdigit()) and (self.trim_to_second != '' and self.trim_to_second.isdigit()) and (self.trim_to_minute != '' and self.trim_to_minute.isdigit()) and (self.trim_to_hour != '' and self.trim_to_hour.isdigit())):
            subprocess.call(['ffmpeg', '-i', self.f2, '-ss', self.trim_from_hour + ':' + self.trim_from_minute + ':' + self.trim_from_second, '-t', self.trim_to_hour + ':' + self.trim_to_minute + ':' + self.trim_to_second, '-y', '-c:v', 'copy', '-c:a', 'copy', self.f]) # trim the video from start to end point
            self.trim_video = True

        if(self.noAudio == 1 and self.trim_video == True):
            subprocess.call(['ffmpeg', '-i', self.f, '-c', 'copy', '-y', '-an', self.f2]) # remove audio from the original video
        elif(self.noAudio == 1 and self.trim_video == False):
            subprocess.call(['ffmpeg', '-i', self.f2, '-c', 'copy', '-y', '-an', self.f]) # remove audio from the original video

        if(self.audiofilename != '' and self.noAudio == 1 and self.trim_video == False):
            subprocess.call(['ffmpeg', '-i', self.f, '-i', self.audiofilename, '-shortest', '-c:v', 'copy', '-b:a', '256k', '-y', self.f2]) # add audio to the original video, trim either the audio or video depends on which one is longer
        elif(self.audiofilename != '' and self.noAudio == 1 and self.trim_video == True):
            subprocess.call(['ffmpeg', '-i', self.f2, '-i', self.audiofilename, '-shortest', '-c:v', 'copy', '-b:a', '256k', '-y', self.f]) # add audio to the original video, trim either the audio or video depends on which one is longer

        if(self.f3.split('.')[-1] != self.f2.split('.')[-1] and self.trim_video == True and self.noAudio == 1 and self.audiofilename != ''):
            subprocess.call(['ffmpeg', '-i', self.f, '-y', self.f3]) # converting the video with ffmpeg
            os.remove(self.f2) # remove two videos
            os.remove(self.f)
        elif(self.f3.split('.')[-1] != self.f2.split('.')[-1] and self.trim_video == False and self.noAudio == 1 and self.audiofilename != ''):
            subprocess.call(['ffmpeg', '-i', self.f2, '-y', self.f3]) # converting the video with ffmpeg
            os.remove(self.f2) # remove two videos
            os.remove(self.f)
        elif(self.f3.split('.')[-1] != self.f2.split('.')[-1] and self.trim_video == False and self.noAudio != 1 and self.audiofilename == ''):
            subprocess.call(['ffmpeg', '-i', self.f2, '-y', self.f3]) # converting the video with ffmpeg
            os.remove(self.f2) # remove two videos
            os.remove(self.f)
        elif(self.f3.split('.')[-1] == self.f2.split('.')[-1] and self.trim_video == True and self.noAudio == 1 and self.audiofilename != ''):
            os.remove(self.f2) # remove one video
        elif(self.f3.split('.')[-1] == self.f2.split('.')[-1] and self.trim_video == True and self.noAudio != 1):
            os.remove(self.f2) # remove one video
        elif(self.f3.split('.')[-1] == self.f2.split('.')[-1] and self.trim_video == False and self.noAudio != 1):
            os.remove(self.f) # remove one video
        elif(self.f3.split('.')[-1] == self.f2.split('.')[-1] and self.trim_video == False and self.noAudio == 1):
            os.remove(self.f2) # remove one video
        else:
            os.remove(self.f) # remove one video

        self.trim_video = False # reset the trim video flag to false

Well, now it is your turn to edit this application and then use it to edit a video. How about creating a progress bar that tells the user how much longer they need to wait for the software to finish editing the video?
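If you want a starting point for that exercise, one possible approach is sketched below (this is not part of the original program, and the names are made up): run an indeterminate ttk.Progressbar while the worker thread is alive and poll it with win.after, since the exact remaining time of an ffmpeg job is hard to predict.

# Possible starting point for the suggested progress bar (illustrative names only)
import threading, time
from tkinter import Tk, Button, HORIZONTAL
import tkinter.ttk as tk

win = Tk()
bar = tk.Progressbar(win, orient=HORIZONTAL, mode='indeterminate', length=200)
bar.pack()

def fake_job():
    time.sleep(3)                          # stand-in for the ffmpeg calls in NewVid.run()

def poll(thread):
    if thread.is_alive():
        win.after(100, poll, thread)       # check again in 100 ms without blocking the UI
    else:
        bar.stop()

def start_job():
    worker = threading.Thread(target=fake_job)
    worker.start()                         # note: no join() here, otherwise the window would freeze
    bar.start(10)
    poll(worker)

Button(win, text='Start', command=start_job).pack()
win.mainloop()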

Planet Python

codingdirectional: The final user interface of this video editing application

Sorry for not posting anything yesterday; I was down with the flu, and although I still feel a little tired and sick today, I wanted to at least get a post up on this site. After a day of hard work I have finally finished tidying up the user interface of this video editing application. I hope there are no bugs in the logic of this revised program, but if you find any, do let me know.

from tkinter import *
from tkinter import filedialog
import os
import subprocess
import tkinter.ttk as tk

win = Tk() # Create tk instance
win.title("NeW Vid") # Add a title
win.resizable(0, 0) # Disable resizing the GUI
win.configure(background='white') # change background color

mainframe = Frame(win) # create a frame
mainframe.pack()

eqFrame = Frame(win) # create eq frame
eqFrame.pack(side = TOP, fill=X)

animatedFrame = Frame(win) # create animated frame
animatedFrame.pack(side = TOP, fill=X)

trimFrame = Frame(win) # create trim frame
trimFrame.pack(side = TOP, fill=X)

buttonFrame = Frame(win) # create a button frame
buttonFrame.pack(side = BOTTOM, fill=X, pady = 6)

# Create a label and scale box for eq
contrast_variable = DoubleVar()
contrast = Scale(eqFrame, from_=float(-2.00), to=float(2.00), orient=HORIZONTAL, label="CONTRAST", digits=3, resolution=0.01, variable=contrast_variable)
contrast.set(1)
contrast.pack(side = LEFT)
brightness_variable = DoubleVar()
brightness = Scale(eqFrame, from_=float(-1.00), to=float(1.00), orient=HORIZONTAL, label="BRIGHTNESS", digits=3, resolution=0.01, variable=brightness_variable)
brightness.pack(side = LEFT)
saturation_variable = DoubleVar()
saturation = Scale(eqFrame, from_=float(0.00), to=float(3.00), orient=HORIZONTAL, label="SATURATION", digits=3, resolution=0.01, variable=saturation_variable)
saturation.set(1)
saturation.pack(side = LEFT)
gamma_variable = DoubleVar()
gamma = Scale(eqFrame, from_=float(0.10), to=float(10.00), orient=HORIZONTAL, label="GAMMA", digits=4, resolution=0.01, variable=gamma_variable)
gamma.set(1)
gamma.pack(side = LEFT)
loop_variable = DoubleVar()
loop = Scale(eqFrame, from_=float(0), to=float(10), orient=HORIZONTAL, label="REPEAT", digits=2, resolution=1, variable=loop_variable)
loop.pack(side = LEFT)
fr_variable = DoubleVar()
fr = Scale(eqFrame, from_=float(9), to=float(60), orient=HORIZONTAL, label="FPS", digits=2, resolution=1, variable=fr_variable)
fr.set(24)
fr.pack(side = LEFT)

#create animated gif
anime = Label(animatedFrame, text="Create Animated Image from Video   ")
anime.pack(side = TOP)
anime.pack(side = LEFT)

from_ = Label(animatedFrame, text="Start From (hour : minute : second)  ")
from_.pack(side = BOTTOM)
from_.pack(side = LEFT)
from_t_h_varable = StringVar()
from_t_h = Entry(animatedFrame, width=3, textvariable=from_t_h_varable)
from_t_h.pack(side=BOTTOM)
from_t_h.pack(side=LEFT)
from_m = Label(animatedFrame, text=" : ")
from_m.pack(side = BOTTOM)
from_m.pack(side = LEFT)
from_t_m_varable = StringVar()
from_t_m = Entry(animatedFrame, width=3,textvariable=from_t_m_varable)
from_t_m.pack(side=BOTTOM)
from_t_m.pack(side=LEFT)
from_s = Label(animatedFrame, text=" : ")
from_s.pack(side = BOTTOM)
from_s.pack(side = LEFT)
from_t_s_varable = StringVar()
from_t_s = Entry(animatedFrame, width=3,textvariable=from_t_s_varable)
from_t_s.pack(side=BOTTOM)
from_t_s.pack(side=LEFT)

to_ = Label(animatedFrame, text="  To (in second)  ")
to_.pack(side = BOTTOM)
to_.pack(side = LEFT)
#to_t_h_varable = StringVar()
#to_t_h = Entry(animatedFrame, width=3,textvariable=to_t_h_varable)
#to_t_h.pack(side=BOTTOM)
#to_t_h.pack(side=LEFT)
#to_m = Label(animatedFrame, text=" : ")
#to_m.pack(side = BOTTOM)
#to_m.pack(side = LEFT)
#to_t_m_varable = StringVar()
#to_t_m = Entry(animatedFrame, width=3,textvariable=to_t_m_varable)
#to_t_m.pack(side=BOTTOM)
#to_t_m.pack(side=LEFT)
#to_s = Label(animatedFrame, text=" : ")
#to_s.pack(side = BOTTOM)
#to_s.pack(side = LEFT)
to_t_s_varable = StringVar()
to_t_s = Entry(animatedFrame, width=3,textvariable=to_t_s_varable)
to_t_s.pack(side=BOTTOM)
to_t_s.pack(side=LEFT)

#trim video
trim = Label(trimFrame, text="Trim Video   ")
trim.pack(side = TOP)
trim.pack(side = LEFT)

trim_from_ = Label(trimFrame, text="Start From (hour : minute : second)  ")
trim_from_.pack(side = BOTTOM)
trim_from_.pack(side = LEFT)
trim_from_t_h_varable = StringVar()
trim_from_t_h = Entry(trimFrame, width=3, textvariable=trim_from_t_h_varable)
trim_from_t_h.pack(side=BOTTOM)
trim_from_t_h.pack(side=LEFT)
trim_from_m = Label(trimFrame, text=" : ")
trim_from_m.pack(side = BOTTOM)
trim_from_m.pack(side = LEFT)
trim_from_t_m_varable = StringVar()
trim_from_t_m = Entry(trimFrame, width=3,textvariable=trim_from_t_m_varable)
trim_from_t_m.pack(side=BOTTOM)
trim_from_t_m.pack(side=LEFT)
trim_from_s = Label(trimFrame, text=" : ")
trim_from_s.pack(side = BOTTOM)
trim_from_s.pack(side = LEFT)
trim_from_t_s_varable = StringVar()
trim_from_t_s = Entry(trimFrame, width=3,textvariable=trim_from_t_s_varable)
trim_from_t_s.pack(side=BOTTOM)
trim_from_t_s.pack(side=LEFT)

trim_to_ = Label(trimFrame, text="  To (in second)  ")
trim_to_.pack(side = BOTTOM)
trim_to_.pack(side = LEFT)
trim_to_t_h_varable = StringVar()
trim_to_t_h = Entry(trimFrame, width=3,textvariable=trim_to_t_h_varable)
trim_to_t_h.pack(side=BOTTOM)
trim_to_t_h.pack(side=LEFT)
trim_to_m = Label(trimFrame, text=" : ")
trim_to_m.pack(side = BOTTOM)
trim_to_m.pack(side = LEFT)
trim_to_t_m_varable = StringVar()
trim_to_t_m = Entry(trimFrame, width=3,textvariable=trim_to_t_m_varable)
trim_to_t_m.pack(side=BOTTOM)
trim_to_t_m.pack(side=LEFT)
trim_to_s = Label(trimFrame, text=" : ")
trim_to_s.pack(side = BOTTOM)
trim_to_s.pack(side = LEFT)
trim_to_t_s_varable = StringVar()
trim_to_t_s = Entry(trimFrame, width=3,textvariable=trim_to_t_s_varable)
trim_to_t_s.pack(side=BOTTOM)
trim_to_t_s.pack(side=LEFT)

# Create a combo box
vid_size = StringVar() # create a string variable
preferSize = tk.Combobox(mainframe, textvariable=vid_size)
preferSize['values'] = (1920, 1280, 854, 640) # video width in pixels
preferSize.current(0) # select item one
preferSize.pack(side = LEFT)

# Create a combo box
vid_format = StringVar() # create a string variable
preferFormat = tk.Combobox(mainframe, textvariable=vid_format)
preferFormat['values'] = ('.mp4', '.webm', '.avi', '.wmv', '.mpg', '.ogv') # video format
preferFormat.current(0) # select item one
preferFormat.pack(side = LEFT)

removeAudioVal = IntVar()
removeAudio = tk.Checkbutton(mainframe, text="Remove Audio", variable=removeAudioVal)
removeAudio.pack(side = LEFT, padx=3)

newAudio = IntVar()
aNewAudio = tk.Checkbutton(mainframe, text="New Audio", variable=newAudio)
aNewAudio.pack(side = LEFT, padx=2)

count = 0 # counter uses to create multiple videos

# Open a video file
def openVideo():

    fullfilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Video file", "*.mp4; *.avi ")]) # select a video file from the hard drive
    audiofilename = ''

    if(fullfilename != ''):

        global count # access the global count variable
        scale_vid = preferSize.get() # retrieve value from the combo box
        new_size = str(scale_vid)
        dir_path = os.path.dirname(os.path.realpath(fullfilename))

        trim_video = False # set the trim video flag to false

        file_extension = fullfilename.split('.')[-1] # extract the video format from the original video

        os.chdir(dir_path) # change the directory to the original file's directory

        f = '_new_vid_' + new_size + '.' + file_extension # the new output file name
        f2 = str(count)+f # second video
        f_gif = str(count) + f + '.gif' # create animated gif

        count += 1 # increase video counter for new video

        # create animated image from video
        animi_from_hour = from_t_h_varable.get()
        animi_from_minute = from_t_m_varable.get()
        animi_from_second = from_t_s_varable.get()

        #animi_to_hour = to_t_h_varable.get()
        #animi_to_minute = to_t_m_varable.get()
        animi_to_second = to_t_s_varable.get()

        # if the time areas are not empty and they have a digit then only the animated gif will be created
        if((animi_from_hour != '' and animi_from_hour.isdigit()) and (animi_from_minute != '' and animi_from_minute.isdigit()) and (animi_from_second != '' and animi_from_second.isdigit()) and (animi_to_second != '' and animi_to_second.isdigit())):
            subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', f]) # resize video
            subprocess.call(['ffmpeg', '-i', f, '-vf', 'eq=contrast=' + str(contrast_variable.get()) +':brightness='+ str(brightness_variable.get()) +':saturation=' + str(saturation_variable.get()) +':gamma='+ str(gamma_variable.get()), '-y', f2]) # adjust the saturation, gamma, contrast and brightness of video
            subprocess.call(['ffmpeg', '-i', f2, '-ss', animi_from_hour + ':' + animi_from_minute + ':' + animi_from_second, '-t', animi_to_second, '-y', f_gif]) # creating animated gif from starting point to end point
            os.remove(f)
            os.remove(f2)
            return 0

        if(newAudio.get() == 1):
            audiofilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Audio file", "*.wav; *.ogg ")]) # select a new audio file from the hard drive

        # video editing part start here
        noAudio = removeAudioVal.get() # get the checkbox state for audio

        subprocess.call(['ffmpeg', '-stream_loop', str(loop_variable.get()), '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', '-r', str(fr_variable.get()), f]) # resize, speedup and loop the video with ffmpeg
        subprocess.call(['ffmpeg', '-i', f, '-vf', 'eq=contrast=' + str(contrast_variable.get()) +':brightness='+ str(brightness_variable.get()) +':saturation=' + str(saturation_variable.get()) +':gamma='+ str(gamma_variable.get()), '-y', f2]) # adjust the saturation, gamma, contrast and brightness of video

        # trim video starting point and end point
        trim_from_hour = trim_from_t_h_varable.get()
        trim_from_minute = trim_from_t_m_varable.get()
        trim_from_second = trim_from_t_s_varable.get()

        trim_to_hour = trim_to_t_h_varable.get()
        trim_to_minute = trim_to_t_m_varable.get()
        trim_to_second = trim_to_t_s_varable.get()

        # if the time areas are not empty and they have a digit then trim the video
        if((trim_from_hour != '' and trim_from_hour.isdigit()) and (trim_from_minute != '' and trim_from_minute.isdigit()) and (trim_from_second != '' and trim_from_second.isdigit()) and (trim_to_second != '' and trim_to_second.isdigit()) and (trim_to_minute != '' and trim_to_minute.isdigit()) and (trim_to_hour != '' and trim_to_hour.isdigit())):
            subprocess.call(['ffmpeg', '-i', f2, '-ss', trim_from_hour + ':' + trim_from_minute + ':' + trim_from_second, '-t', trim_to_hour + ':' + trim_to_minute + ':' + trim_to_second, '-y', '-c:v', 'copy', '-c:a', 'copy', f]) # trim the video from start to end point
            trim_video = True

        if(noAudio == 1 and trim_video == True):
            subprocess.call(['ffmpeg', '-i', f, '-c', 'copy', '-y', '-an', f2]) # remove audio from the original video
        elif(noAudio == 1 and trim_video == False):
            subprocess.call(['ffmpeg', '-i', f2, '-c', 'copy', '-y', '-an', f]) # remove audio from the original video

        if(audiofilename != '' and noAudio == 1 and newAudio.get() == 1 and trim_video == False):
            subprocess.call(['ffmpeg', '-i', f, '-i', audiofilename, '-shortest', '-c:v', 'copy', '-b:a', '256k', '-y', f2]) # add audio to the original video, trim either the audio or video depends on which one is longer
        elif(audiofilename != '' and noAudio == 1 and newAudio.get() == 1 and trim_video == True):
            subprocess.call(['ffmpeg', '-i', f2, '-i', audiofilename, '-shortest', '-c:v', 'copy', '-b:a', '256k', '-y', f]) # add audio to the original video, trim either the audio or video depends on which one is longer

        f3 = f + vid_format.get() # The final video format

        if(f3.split('.')[-1] != f2.split('.')[-1] and trim_video == True and noAudio == 1 and newAudio.get() == 1 and audiofilename != ''):
            subprocess.call(['ffmpeg', '-i', f, '-y', f3]) # converting the video with ffmpeg
            os.remove(f2) # remove two videos
            os.remove(f)
        elif(f3.split('.')[-1] != f2.split('.')[-1] and trim_video == False and noAudio == 1 and newAudio.get() == 1 and audiofilename != ''):
            subprocess.call(['ffmpeg', '-i', f2, '-y', f3]) # converting the video with ffmpeg
            os.remove(f2) # remove two videos
            os.remove(f)
        elif(f3.split('.')[-1] != f2.split('.')[-1] and trim_video == False and noAudio != 1 and newAudio.get() != 1 and audiofilename == ''):
            subprocess.call(['ffmpeg', '-i', f2, '-y', f3]) # converting the video with ffmpeg
            os.remove(f2) # remove two videos
            os.remove(f)
        elif(f3.split('.')[-1] == f2.split('.')[-1] and trim_video == True and noAudio == 1 and audiofilename != ''):
            os.remove(f2) # remove one video
        elif(f3.split('.')[-1] == f2.split('.')[-1] and trim_video == True and noAudio != 1):
            os.remove(f2) # remove one video
        elif(f3.split('.')[-1] == f2.split('.')[-1] and trim_video == False and noAudio != 1):
            os.remove(f) # remove one video
        elif(f3.split('.')[-1] == f2.split('.')[-1] and trim_video == False and noAudio == 1):
            os.remove(f2) # remove one video
        else:
            os.remove(f) # remove one video

        trim_video = False # reset the trim video flag to false

action_vid = tk.Button(buttonFrame, text="Open Video", command=openVideo)
action_vid.pack(fill=X)

win.mainloop()

The above program includes these two new features: 1) it allows the user to trim the video from one point to another, and 2) it adds the equalizer feature for animated images as well as video. Below is one of the animated images created with the above program, followed by a short sketch of the ffmpeg calls behind the two features.
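For reference, the two features boil down to two ffmpeg invocations like the ones below. This is a simplified sketch with hard-coded file names and values; the real program builds the arguments from the Tkinter variables.

# Simplified versions of the two new features (file names and values are made up)
import subprocess

# 1) trim: keep the section starting at 0:01:30, with a duration of 10 seconds
subprocess.call(['ffmpeg', '-i', 'input.mp4', '-ss', '0:01:30', '-t', '10',
                 '-c:v', 'copy', '-c:a', 'copy', '-y', 'trimmed.mp4'])

# 2) equalizer: adjust contrast, brightness, saturation and gamma with the eq filter
subprocess.call(['ffmpeg', '-i', 'trimmed.mp4',
                 '-vf', 'eq=contrast=1.2:brightness=0.05:saturation=1.5:gamma=1.0',
                 '-y', 'trimmed_eq.mp4'])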

We have one more thing to deal with, which is the thread issue, and then we will be able to enjoy this application online together!

Planet Python

codingdirectional: Create equalizer feature for video editing program

Today we will continue to work on the user interface of the previous video editing application. The equalizer options for this video application will be located in the second row of the application. We will create another frame to hold all the equalizer options; each option consists of a scale box that lets the user select a value for that equalizer setting, and a default value for each option is provided as well.
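To make the idea concrete before the full listing, here is a minimal sketch (not the final program, and the file names are made up) of a single scale box whose value ends up in the ffmpeg eq filter string.

# Minimal sketch: one equalizer option wired from a Scale box to ffmpeg's eq filter
import subprocess
from tkinter import Tk, Scale, Button, DoubleVar, HORIZONTAL

win = Tk()
contrast_variable = DoubleVar()
contrast = Scale(win, from_=-2.00, to=2.00, orient=HORIZONTAL, label="CONTRAST",
                 digits=3, resolution=0.01, variable=contrast_variable)
contrast.set(1)  # default value
contrast.pack()

def apply_eq():
    # build the filter string from the scale value and re-encode the video
    subprocess.call(['ffmpeg', '-i', 'input.mp4',
                     '-vf', 'eq=contrast=' + str(contrast_variable.get()),
                     '-y', 'output.mp4'])

Button(win, text="Apply", command=apply_eq).pack()
win.mainloop()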

This is the revised UI for the program.

http://islandstropicalman.tumblr.com/post/182155440040/the-video-editing-application

When we select the above options and then select a video to edit, we get the outcome below.

http://islandstropicalman.tumblr.com/post/182155502874/the-video-created-with-a-video-editing-application

Below is the revised Python program for this video editing project.

from tkinter import *
from tkinter import filedialog
import os
import subprocess
import tkinter.ttk as tk

win = Tk() # Create instance
win.title("NeW Vid") # Add a title
win.resizable(0, 0) # Disable resizing the GUI
win.configure(background='white') # change background color

mainframe = Frame(win) # create a frame
mainframe.pack()

eqFrame = Frame(win) # create eq frame
eqFrame.pack(side = TOP, fill=X)

buttonFrame = Frame(win) # create a button frame
buttonFrame.pack(side = BOTTOM, fill=X)

# Create a label and scale box for eq
contrast_variable = DoubleVar()
contrast = Scale(eqFrame, from_=float(-2.00), to=float(2.00), orient=HORIZONTAL, label="CONTRAST", digits=3, resolution=0.01, variable=contrast_variable)
contrast.set(1)
contrast.pack(side = LEFT)
brightness_variable = DoubleVar()
brightness = Scale(eqFrame, from_=float(-1.00), to=float(1.00), orient=HORIZONTAL, label="BRIGHTNESS", digits=3, resolution=0.01, variable=brightness_variable)
brightness.pack(side = LEFT)
saturation_variable = DoubleVar()
saturation = Scale(eqFrame, from_=float(0.00), to=float(3.00), orient=HORIZONTAL, label="SATURATION", digits=3, resolution=0.01, variable=saturation_variable)
saturation.set(1)
saturation.pack(side = LEFT)
gamma_variable = DoubleVar()
gamma = Scale(eqFrame, from_=float(0.10), to=float(10.00), orient=HORIZONTAL, label="GAMMA", digits=4, resolution=0.01, variable=gamma_variable)
gamma.set(1)
gamma.pack(side = LEFT)

# Create a combo box
vid_size = StringVar() # create a string variable
preferSize = tk.Combobox(mainframe, textvariable=vid_size)
preferSize['values'] = (1920, 1280, 854, 640) # video width in pixels
preferSize.grid(column=0, row=1) # the position of combo box
preferSize.current(0) # select item one
preferSize.pack(side = LEFT, expand = TRUE)

removeAudioVal = IntVar()
removeAudio = tk.Checkbutton(mainframe, text="Remove Audio", variable=removeAudioVal)
removeAudio.pack(side = LEFT, padx=3)

newAudio = IntVar()
aNewAudio = tk.Checkbutton(mainframe, text="New Audio", variable=newAudio)
aNewAudio.pack(side = LEFT, padx=2)

# Open a video file
def openVideo():

    fullfilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Video file", "*.mp4; *.avi ")]) # select a video file from the hard drive
    audiofilename = ''
    if(newAudio.get() == 1):
        audiofilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Audio file", "*.wav; *.ogg ")]) # select a new audio file from the hard drive

    if(fullfilename != ''):

        scale_vid = preferSize.get() # retrieve value from the combo box
        new_size = str(scale_vid)
        dir_path = os.path.dirname(os.path.realpath(fullfilename))
        os.chdir(dir_path)
        f = new_size + '.mp4' # the new output file name/format
        f2 = f + '.mp4' # mp4 video

        noAudio = removeAudioVal.get() # get the checkbox state for audio

        #subprocess.call(['ffmpeg', '-stream_loop', '2', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', f]) # resize and loop the video with ffmpeg
        #subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', '-r', '24', f]) # resize and speed up the video with ffmpeg
        #subprocess.call(['ffmpeg', '-i', f, '-ss', '00:02:30', '-y', f2]) # create animated gif starting from 2 minutes and 30 seconds to the end
        subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', f]) # resize the video with ffmpeg

        if(noAudio == 1):
            subprocess.call(['ffmpeg', '-i', f, '-c', 'copy', '-y', '-an', f2]) # remove audio from the original video

        if(audiofilename != '' and noAudio == 1 and newAudio.get() == 1):
            subprocess.call(['ffmpeg', '-i', f2, '-i', audiofilename, '-shortest', '-c:v', 'copy', '-c:a', 'aac', '-b:a', '256k', '-y', f]) # add audio to the original video, trim either the audio or video depends on which one is longer

        subprocess.call(['ffmpeg', '-i', f, '-vf', 'eq=contrast=' + str(contrast_variable.get()) +':brightness='+ str(brightness_variable.get()) +':saturation=' + str(saturation_variable.get()) +':gamma='+ str(gamma_variable.get()), '-y', f2]) # adjust the saturation, gamma, contrast and brightness of video
        #subprocess.call(['ffmpeg', '-i', f, '-y', f2]) # converting the video with ffmpeg

action_vid = tk.Button(buttonFrame, text="Open Video", command=openVideo)
action_vid.pack(fill=X)

win.mainloop()

Tomorrow we will continue to edit this video editing software so stay tuned.

Planet Python

codingdirectional: Tidy up the user interface of the video editing application

Hello and welcome back. It has been a day since the last post, and today we will continue to edit our video application project. Now that I have included the final feature for this project, I can concentrate on the user interface part. My ideology is always to focus on the main objective first and work on the small details later when it comes to programming: as long as we have destroyed the main battleship, it will be easy to take on the small battleships that have lost their main supply line.

In this article we will create the user interface below, which consists of a button to select the video file, a checkbox to remove the audio and another checkbox for adding new audio.

The new user interface
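Before the full listing, this is roughly how the Remove Audio checkbox drives ffmpeg. It is a trimmed-down sketch with made-up file names, not the final program.

# Trimmed-down sketch of the Remove Audio checkbox (file names are made up)
import subprocess
from tkinter import Tk, IntVar
import tkinter.ttk as tk

win = Tk()
removeAudioVal = IntVar()
tk.Checkbutton(win, text="Remove Audio", variable=removeAudioVal).pack()

def convert():
    if removeAudioVal.get() == 1:
        # -an drops the audio stream; -c copy keeps the video stream untouched
        subprocess.call(['ffmpeg', '-i', 'input.mp4', '-c', 'copy', '-an', '-y', 'no_audio.mp4'])

tk.Button(win, text="Convert", command=convert).pack()
win.mainloop()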

Below is the entire program.

from tkinter import *
from tkinter import filedialog
import os
import subprocess
import tkinter.ttk as tk

win = Tk() # Create instance
win.title("NeWw Vid") # Add a title
win.resizable(0, 0) # Disable resizing the GUI
win.configure(background='white') # change background color

mainframe = Frame(win) # create a frame
mainframe.pack()

buttonFrame = Frame(win) # create a button frame
buttonFrame.pack(side = BOTTOM, fill=X)

# Create a label
#aLabel = Label(win, text="Select video size and video", anchor="center", padx=13, pady=10, relief=RAISED)
#aLabel.grid(column=0, row=0, sticky=W+E)
#aLabel.configure(foreground="black")
#aLabel.configure(background="white")
#aLabel.configure(wraplength=110)

# Create a combo box
vid_size = StringVar() # create a string variable
preferSize = tk.Combobox(mainframe, textvariable=vid_size)
preferSize['values'] = (1920, 1280, 854, 640) # video width in pixels
preferSize.grid(column=0, row=1) # the position of combo box
preferSize.current(0) # select item one
preferSize.pack(side = LEFT, expand = TRUE)

removeAudioVal = IntVar()
removeAudio = tk.Checkbutton(mainframe, text="Remove Audio", variable=removeAudioVal)
removeAudio.pack(side = LEFT, padx=3)

newAudio = IntVar()
aNewAudio = tk.Checkbutton(mainframe, text="New Audio", variable=newAudio)
aNewAudio.pack(side = LEFT, padx=2)

# Open a video file
def openVideo():

    fullfilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Video file", "*.mp4; *.avi ")]) # select a video file from the hard drive
    audiofilename = '' # initialize the audio file name so the check below never hits an undefined variable
    if(newAudio.get() == 1):
        audiofilename = filedialog.askopenfilename(initialdir="/", title="Select a file", filetypes=[("Audio file", "*.wav; *.ogg ")]) # select a new audio file from the hard drive

    if(fullfilename != ''):

        scale_vid = preferSize.get() # retrieve value from the combo box
        new_size = str(scale_vid)
        dir_path = os.path.dirname(os.path.realpath(fullfilename))
        os.chdir(dir_path)
        f = new_size + '.mp4' # the new output file name/format
        f2 = f + '.mp4' # second video

        noAudio = removeAudioVal.get() # get the checkbox state for audio

        #subprocess.call(['ffmpeg', '-stream_loop', '2', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', f]) # resize and loop the video with ffmpeg
        #subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', '-r', '24', f]) # resize and speed up the video with ffmpeg
        #subprocess.call(['ffmpeg', '-i', f, '-ss', '00:02:30', '-y', f2]) # create animated gif starting from 2 minutes and 30 seconds to the end
        subprocess.call(['ffmpeg', '-i', fullfilename, '-vf', 'scale=' + new_size + ':-1', '-y', f]) # resize the video with ffmpeg

        if(noAudio == 1):
            subprocess.call(['ffmpeg', '-i', f, '-c', 'copy', '-y', '-an', f2]) # remove audio from the original video

        if(audiofilename != '' and noAudio == 1):
            subprocess.call(['ffmpeg', '-i', f2, '-i', audiofilename, '-shortest', '-c:v', 'copy', '-c:a', 'aac', '-b:a', '256k', '-y', f]) # add audio to the original video, trim either the audio or video depends on which one is longer

        #subprocess.call(['ffmpeg', '-i', f, '-vf', 'eq=contrast=1.3:brightness=-0.03:saturation=0.01', '-y', f2]) # adjust the saturation, contrast and brightness of video
        #subprocess.call(['ffmpeg', '-i', f, '-y', f2]) # converting the video with ffmpeg

action_vid = tk.Button(buttonFrame, text="Open Video", command=openVideo)
action_vid.pack(fill=X)

win.mainloop()

Not bad for now; we will continue to modify the user interface in the next chapter. Below is the new video that this program has created.

http://islandstropicalman.tumblr.com/post/182129073622/music-with-a-lot-of-ants

Planet Python