03/12/2015
[TIBCO Spotfire] Export data table as CSV via IronPython
Here's a sample IronPython script to programmatically export a data table as CSV:
from Spotfire.Dxp.Data.Export import DataWriterTypeIdentifiers
from System.IO import File
#ExcelXlsDataWriter (as in the original snippet) would produce an Excel workbook regardless of the file extension;
#for a text/CSV-style export use the tab-separated writer instead
writer = Document.Data.CreateDataWriter(DataWriterTypeIdentifiers.TabSeparatedTextDataWriter)
table = Document.ActiveDataTableReference #OR pass the DataTable as parameter
filtered = Document.ActiveFilteringSelectionReference.GetSelection(table).AsIndexSet() #OR pass the filter
stream = File.OpenWrite("PATH/NAME.csv")
names = []
for col in table.Columns:
    names.append(col.Name)
writer.Write(stream, table, filtered, names)
stream.Close()
Note: the path you pass to the File.OpenWrite function is relative to the machine running the analysis; on the Web Player this means the file is written on the server, so the end user will never receive it unless you find a way to stream the data back to their browser.
[TIBCO Spotfire] Mark rows via IronPython
Here's a sample IronPython script to programmatically mark rows matching a particular value (set GRAPH as a script parameter pointing to the visualization you want to work on):
from Spotfire.Dxp.Data import DataPropertyClass
from Spotfire.Dxp.Application.Visuals import VisualContent
from Spotfire.Dxp.Data import IndexSet
from Spotfire.Dxp.Data import RowSelection
from Spotfire.Dxp.Data import DataValueCursor
from System import String
# get object reference
vc = GRAPH.As[VisualContent]()
dataTable = vc.Data.DataTableReference
# get marking
marking = vc.Data.MarkingReference
rowCount = dataTable.RowCount
rowsToInclude = IndexSet(rowCount, True)
rowsToSelect = IndexSet(rowCount, False)
cursor1 = DataValueCursor.CreateFormatted(dataTable.Columns["COLUMN_NAME"])
#find records by looping through all rows
idx = 0
for row in dataTable.GetRows(rowsToInclude, cursor1):
    aTag = cursor1.CurrentValue
    print aTag
    # if there's a match, mark it
    if aTag == VALUE:
        rowsToSelect[idx] = True
        print idx
    idx = idx + 1
#set marking
marking.SetSelection(RowSelection(rowsToSelect), dataTable)
[TIBCO Spotfire] Filter handling via IronPython - set value
Here is a sample IronPython script to programmatically set a value for a filter in Spotfire; if the value does not exist among the allowed filter values, it sets the maximum possible one:
import Spotfire.Dxp.Application.Filters as filters
import Spotfire.Dxp.Application.Filters.ListBoxFilter
from Spotfire.Dxp.Application.Filters import FilterTypeIdentifiers
#use the following line if the filter is to be applied to the currently active page - CAUTION, as we might alter the filter for a different filtering scheme then!
myPanel = Document.ActivePageReference.FilterPanel
#alternatively use the following line to set it for a specific page,
#where myPage is a script parameter that points to the page we want to work on:
#myPanel = myPage.FilterPanel
myFilter = myPanel.TableGroups[0].GetFilter("FILTER_NAME")
lbFilter = myFilter.FilterReference.As[filters.ItemFilter]()
if VALUE in lbFilter.Values:
    lbFilter.Value = VALUE
else:
    lbFilter.Value = max(lbFilter.Values)
[TIBCO Spotfire] Cumulative sum
Here's a simple formula to plot the cumulative sum (trend) of data in Spotfire:
Sum([COLUMN]) OVER (intersect(AllPrevious([Axis.Rows]),[GROUP_BY_1],...,[GROUP_BY_N])) as [ALIAS]
Just put it on the X-axis and you're set. The grouping is optional.
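For example, with hypothetical [Sales] and [Region] columns, a per-region running total would be:
Sum([Sales]) OVER (Intersect(AllPrevious([Axis.Rows]),[Region])) as [CumulativeSales]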
07/11/2015
[TIBCO Spotfire] OSI PI data source template
Here is a sample connection template to add OSI PI as Data Source in the Spotfire Server:
<!-- WARNING!!!! Old PI JDBC driver versions have .NET dependencies (RDSAWrapper.dll) - https://techsupport.osisoft.com/Troubleshooting/KB/KB00494 -->
<jdbc-type-settings>
<type-name>PI</type-name>
<driver>com.osisoft.jdbc.Driver</driver>
<!-- SQLDASServer is the OSI component we should connect to, PIServer is the name of the PI Server we want to access AS REGISTERED ON THE DAS server -->
<connection-url-pattern>jdbc:pisql://SQLDASServer/Data Source=PIServer;Integrated Security=SSPI</connection-url-pattern>
<!-- to use PI authentication instead of Windows Integrated Authentication, use this url:
<connection-url-pattern>jdbc:pisql://SQLDASServer/Data Source=PIServer;User ID=Username;Password=Password</connection-url-pattern>
-->
<supports-catalogs>true</supports-catalogs>
<supports-schemas>false</supports-schemas>
<supports-procedures>false</supports-procedures>
<use-ansii-style-outer-join>true</use-ansii-style-outer-join>
</jdbc-type-settings>
NOTE: depending on your connection needs (Windows Integrated Authentication OR username/password), you need to change the connection-url-pattern accordingly. This means that you CANNOT keep both connection-url-pattern sections from the sample above when you add the template; nonetheless, the user CAN still change the connection URL to either one of those values as needed when setting up the connection.
Remember to read the Spotfire documentation (chapter 11.5) on how to set it up correctly!
[TIBCO Spotfire] SAP HANA data source template
Here is a sample connection template to add SAP HANA as Data Source in the Spotfire Server:
Remember to read the Spotfire documentation (chapter 11.5) on how to set it up correctly!
<jdbc-type-settings>
<type-name>hana</type-name>
<driver>com.sap.db.jdbc.Driver</driver>
<connection-url-pattern>jdbc:sap://host:port?reconnect=true;</connection-url-pattern>
<ping-command>select 1 from dummy</ping-command>
<supports-catalogs>true</supports-catalogs>
<supports-schemas>true</supports-schemas>
<supports-procedures>true</supports-procedures>
<table-types>TABLE, CALC VIEW, OLAP VIEW, JOIN VIEW, HIERARCHY VIEW, VIEW</table-types>
</jdbc-type-settings>
[TIBCO Spotfire] Excel JDBC data source template
If you want to take full advantage of the Information Designer capabilities when connecting to non-default data sources, and your analysis can't be fully developed using only Connectors and/or ADS, you might want to create your own data source template. [1]
The idea is simple: as long as you have a proper JDBC driver and can supply basic details on how to connect and handle operations, Spotfire Server allows you to add a custom connection template that end users will be able to select when creating a new Data Source. [2]
Here is a sample connection template for Excel. Note that in this case you will need the ODBC driver as well.
<jdbc-type-settings>
<!-- Informative name and display-name -->
<type-name>ODBC-EXCEL</type-name>
<!-- point out the JDBC-ODBC bridge -->
<driver>sun.jdbc.odbc.JdbcOdbcDriver</driver>
<!-- Pattern displayed to administrator when setting up the Datasource -->
<connection-url-pattern>jdbc:odbc:excel-odbc-source</connection-url-pattern>
<!-- Table types allowed for Excel are TABLE and SYSTEM TABLE -->
<table-types>TABLE, SYSTEM TABLE</table-types>
<!-- As ping command we will use the integer constant 1. Could really be any pseudo function. -->
<ping-command>SELECT 1</ping-command>
<connection-properties />
<!-- Excel does not support catalogs nor schemas -->
<supports-catalogs>false</supports-catalogs>
<supports-schemas>false</supports-schemas>
<!-- Excel has an issue with DISTINCT in combination with ORDER BY on all columns. -->
<!-- We choose between supporting order-by and distinct in favor of distinct, -->
<!-- to make prompts without duplicates and support distinct conditioning. -->
<supports-order-by>false</supports-order-by>
<!-- Format pattern for date, time and datetime (same as timestamp). -->
<date-literal-format-expression>{d '$$value$$'}</date-literal-format-expression>
<time-literal-format-expression>{t '$$value$$'}</time-literal-format-expression>
<date-time-literal-format-expression>{ts '$$value$$'}</date-time-literal-format-expression>
</jdbc-type-settings>
[1] Data source template documentation - Chapter 11.5
[2] Sample guidelines to add Attivio as data source
01/11/2015
[Java Spring] Plan my Groove
Plan my Groove is a simple Java7+ application that exposes a REST API to let users remotely execute Groovy scripts.
It was tested on both JDK7 and JDK8 and as of now it exposes functionality to:
- submit a job
- list and search for jobs
- retrieve the result of a job
- delete a job
Where a job is a simple Groovy script. It's possible to submit multiple jobs simultaneously and also run them at the same time.
It relies heavily on the RepositoryRestResource functionality offered by the Spring framework to provide a set of RESTful services.
Find the source code at https://github.com/steghio/PlanMyGroove and for more information, read the documentation at https://github.com/steghio/PlanMyGroove/blob/master/plan_my_groove_manual.pdf
13/09/2015
[Java] Better Fibonacci algorithms
Last time we saw a good Fibonacci algorithm, which is better than the textbook one but still falls short of its duties for numbers around 100.
So here are a couple of other algorithms which can efficiently compute big Fibonacci numbers correctly. I was able to test them up to around 10 million, after which it took more than 30 seconds on my machine to produce a solution, which wasn't always correct.
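The originals aren't reproduced here, but as a point of reference, here is a minimal fast-doubling sketch of my own (not necessarily one of the algorithms the post refers to) that uses BigInteger to stay exact for arbitrarily large n:
import java.math.BigInteger;

public class FastFib {
    //fast-doubling identities: F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2
    //returns {F(n), F(n+1)}
    private static BigInteger[] fib(long n) {
        if (n == 0) return new BigInteger[]{BigInteger.ZERO, BigInteger.ONE};
        BigInteger[] half = fib(n / 2);
        BigInteger a = half[0], b = half[1];
        BigInteger c = a.multiply(b.shiftLeft(1).subtract(a)); //F(2k)
        BigInteger d = a.multiply(a).add(b.multiply(b)); //F(2k+1)
        if (n % 2 == 0) return new BigInteger[]{c, d};
        return new BigInteger[]{d, c.add(d)};
    }

    public static BigInteger fibFast(long n) {
        return fib(n)[0];
    }

    public static void main(String[] args) {
        System.out.println(fibFast(100)); //354224848179261915075
    }
}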
12/09/2015
[Java] Good Fibonacci algorithm
Here is an efficient implementation of the Fibonacci algorithm by Michael Goodrich. I found it to be very fast, and you should consider using it instead of the textbook implementation, which is highly unoptimized and will start failing pretty much immediately as you raise the number you want to compute.
You will have the answer for fibGood(n) in fibGood(n)[0]:
public static long[] fibGood(int n) {
    if (n <= 1) {
        long[] answer = {n, 0};
        return answer;
    } else {
        long[] tmp = fibGood(n - 1);
        long[] answer = {tmp[0] + tmp[1], tmp[0]};
        return answer;
    }
}
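For example, a quick check with a value that still fits in a long:
long[] result = fibGood(50);
System.out.println(result[0]); //prints 12586269025
Keep in mind that long overflows for n > 92, which is where arbitrary-precision variants like the ones mentioned in the better algorithms post become necessary.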
03/08/2015
[Spotfire] Send JMS message to EMS with IronPython script
Want to take your Spotfire analysis to the next level? What about adding an extra level of interactivity to it?
Maybe you also have a BusinessWorks or BusinessEvents engine somewhere feeding data to it, and you spot something that requires your intervention, or maybe you want to replay a flow for some reason. Sounds like you need Spotfire to communicate with those engines.
HTTP? Sure it works. What if you prefer JMS instead because you also have your own EMS server?
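The original post doesn't inline the script, but as a rough, untested sketch, sending a JMS text message from IronPython could look like this, assuming the TIBCO.EMS .NET client library is available on the machine; the DLL path, server URL, credentials and queue name are all placeholders, and the exact acknowledge-mode name may vary with the client version:
import clr
clr.AddReferenceToFileAndPath("C:\\tibco\\ems\\bin\\TIBCO.EMS.dll") #placeholder path to the EMS .NET client
from TIBCO.EMS import ConnectionFactory, SessionMode

factory = ConnectionFactory("tcp://EMS_HOST:7222")
connection = factory.CreateConnection("USER", "PASSWORD")
session = connection.CreateSession(False, SessionMode.AutoAcknowledge)
queue = session.CreateQueue("QUEUE_NAME")
producer = session.CreateProducer(queue)
producer.Send(session.CreateTextMessage("PAYLOAD"))
connection.Close()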
[APT] Fix "some indexes failed to download" error
If you ever encounter this APT error:
some indexes failed to download (E: Some index files failed to download. They have been ignored, or old ones used instead.)
It might mean that your local APT info somehow was messed up but you can easily fix this by purging it and asking APT to update it again:
sudo rm -vf /var/lib/apt/lists/*
sudo apt-get update
[Oracle] Purge schema
Purging a schema in Oracle isn't a straightforward procedure. Usually it's better to DROP the schema or the USER and recreate it.
But if you do not have the permissions to do that, or have other restrictions preventing you to perform the operation, you might find this piece of SQL code useful:
SELECT 'drop '||object_type||' '||object_name||' '||DECODE(object_type,'TABLE', ' cascade constraints;', ';') FROM USER_OBJECTS
This will generate drop statements for ALL objects in the schema it's run on. Just execute it after connecting as the user whose schema you want to purge, then copy the output and run it as script.
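If copying the output around is tedious, the same idea can be wrapped in an anonymous PL/SQL block; this is a sketch with the same caveats as the query above (some object types may need special handling, and dependent objects may already be gone by the time their drop statement runs, hence the broad exception handler):
BEGIN
  FOR o IN (SELECT object_type, object_name FROM user_objects) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'drop '||o.object_type||' '||o.object_name
        ||CASE o.object_type WHEN 'TABLE' THEN ' cascade constraints' ELSE '' END;
    EXCEPTION WHEN OTHERS THEN NULL; --ignore objects already dropped as dependencies
    END;
  END LOOP;
END;
/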
[Python] HTTP POST
Here is a sample piece of code on how to issue HTTP POST requests with an XML payload from IronPython, using the .NET libraries:
from System.Net import WebRequest
from System.Text import Encoding
from System.IO import StreamReader

URI = 'https://httpbin.org/post'
PARAMETERS = "<NODE>VALUE</NODE>"

request = WebRequest.Create(URI)
request.ContentType = "text/xml"
request.Method = "POST"

bytes = Encoding.ASCII.GetBytes(PARAMETERS)
request.ContentLength = bytes.Length
reqStream = request.GetRequestStream()
reqStream.Write(bytes, 0, bytes.Length)
reqStream.Close()

response = request.GetResponse()
result = StreamReader(response.GetResponseStream()).ReadToEnd()
print result
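Since the snippet above relies on the .NET classes, it's really meant for IronPython. On plain CPython 2 the same request can be issued with just the standard library, along these lines:
import urllib2
request = urllib2.Request(URI, PARAMETERS, {'Content-Type': 'text/xml'})
response = urllib2.urlopen(request)
print response.read()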
[Java] untar
Now that we know how to tar in Java, let's see how to untar using the same Apache Commons Compress library:
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.utils.IOUtils;

public void untar(String tarlocation, String tarname, String untarlocation) throws Exception {
    File tarFile = new File(tarlocation + tarname);
    FileInputStream fin = null;
    TarArchiveInputStream tar_is = null;
    try {
        fin = new FileInputStream(tarFile);
        tar_is = new TarArchiveInputStream(fin);
        TarArchiveEntry entry;
        //for each entry, untar to untarlocation, creating directories if needed
        while ((entry = tar_is.getNextTarEntry()) != null) {
            File entryDestination = new File(untarlocation, entry.getName());
            if (entry.isDirectory()) {
                entryDestination.mkdirs();
            } else {
                //if necessary create the parent directory structure
                entryDestination.getParentFile().mkdirs();
                OutputStream out = null;
                try {
                    //untar current entry
                    out = new FileOutputStream(entryDestination);
                    IOUtils.copy(tar_is, out);
                } finally {
                    //close only the entry output stream here; the archive stream must stay open for the next entries
                    IOUtils.closeQuietly(out);
                }
            }
        }
    } finally {
        //close the archive streams ignoring exceptions
        IOUtils.closeQuietly(tar_is);
        IOUtils.closeQuietly(fin);
    }
}
[Java] Tar file or folders
A simple way to tar a file or folder (with or without subdirectories) maintaining the folders structure is using the Apache Commons Compress library.
It has to be recursive so that we can handle subdirectories correctly. The resulting tarred file will untar to the same exact folder structure originally tarred. If you pass the location and tarlocation parameters with the path separator already appended, there's no need to concatenate File.separator in the code.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.commons.compress.archivers.ArchiveOutputStream;
import org.apache.commons.compress.archivers.ArchiveStreamFactory;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.utils.IOUtils;

public void tar(String location, String name, String tarlocation, String tarname) throws Exception {
    //out writes the final file, tar_out creates the tar archive
    OutputStream out = new FileOutputStream(new File(tarlocation + File.separator + tarname + ".tar"));
    ArchiveOutputStream tar_out = new ArchiveStreamFactory().createArchiveOutputStream(ArchiveStreamFactory.TAR, out);
    //tar it
    File f = new File(location + File.separator + name);
    //first time baseDir is empty
    dotar(tar_out, f, "");
    //close archive
    tar_out.finish();
    out.close();
}

//aux method for tarring; the archive stream is passed along so that recursive calls write to the same tar
private void dotar(ArchiveOutputStream tar_out, File myFile, String baseDir) throws Exception {
    //maintain the directory structure while tarring
    String entryName = baseDir + myFile.getName();
    //DO NOT do a putArchiveEntry for folders as it is not needed
    //if it's a directory, list and tar the contents; recursion handles nested directories
    if (myFile.isDirectory()) {
        File[] filesList = myFile.listFiles();
        if (filesList != null) {
            for (File file : filesList) {
                dotar(tar_out, file, entryName + File.separator);
            }
        }
    } else {
        //add file
        FileInputStream tmp_fis = new FileInputStream(myFile);
        try {
            tar_out.putArchiveEntry(new TarArchiveEntry(myFile, entryName));
            IOUtils.copy(tmp_fis, tar_out);
            tar_out.closeArchiveEntry();
        } finally {
            tmp_fis.close();
        }
    }
}
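As a usage sketch (hypothetical paths, and assuming the caller handles the declared exceptions), tarring /tmp/mydata into /tmp/backup.tar and extracting it again would look like:
tar("/tmp", "mydata", "/tmp", "backup");
untar("/tmp/", "backup.tar", "/tmp/restore/");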
14/07/2015
[Oracle] wget or curl binaries from OTN website
Usually when you try to download Oracle's JDK or Oracle's DB driver jars you need to manually accept a license agreement before you are allowed to download them.
This obviously is not possible when fetching those resources with wget or curl, but you can work around it by adding a cookie to your request saying that you do accept the license:
wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" "RESOURCE URL"
Where RESOURCE URL for latest JDK7 for example is: http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz
If you do not do this, you'll just download a file saying that you need to accept the license instead.
[Java] Gzip deflate and inflate
Here's a simple class to deflate and inflate Gzip files in Java. It uses Apache Commons Compress (the same utility methods also exist in the Apache Commons libraries) for some helpers, but they can be replaced with pure Java versions.
As you can see, all the usual checks have been omitted, but I'm sure you'll check for file existence, etc. beforehand.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import org.apache.commons.compress.utils.IOUtils;

public static class GZ {

    public static void gzDeflate(String in, String out) {
        FileInputStream fin = null;
        FileOutputStream fout = null;
        GZIPOutputStream gz_out = null;
        try {
            fin = new FileInputStream(in); //file to compress
            fout = new FileOutputStream(out); //output file
            gz_out = new GZIPOutputStream(fout); //gzipped file
            //creates the gzip
            IOUtils.copy(fin, gz_out);
            /*if you do not want to use Apache libraries, you can read bytes from fin and write them to gz_out in a loop*/
        } catch (Exception e) {
            //do something
        } finally {
            //DO NOT close in any other order!!
            IOUtils.closeQuietly(gz_out);
            IOUtils.closeQuietly(fout);
            IOUtils.closeQuietly(fin);
            /*if you do not want to use Apache libraries you can check that each object is not null and close it*/
        }
    }

    public static void gzInflate(String in, String out) {
        FileInputStream fin = null;
        GZIPInputStream gz_in = null;
        FileOutputStream fout = null;
        try {
            fin = new FileInputStream(in); //file to decompress
            gz_in = new GZIPInputStream(fin);
            fout = new FileOutputStream(out); //output file
            //inflates the gzip
            IOUtils.copy(gz_in, fout);
            /*if you do not want to use Apache libraries, you can read bytes from gz_in and write them to fout in a loop*/
        } catch (Exception e) {
            //do something
        } finally {
            //DO NOT close in any other order!!
            IOUtils.closeQuietly(fout);
            IOUtils.closeQuietly(gz_in);
            IOUtils.closeQuietly(fin);
        }
    }
}
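Usage is then simply (hypothetical file names):
GZ.gzDeflate("/tmp/data.txt", "/tmp/data.txt.gz");
GZ.gzInflate("/tmp/data.txt.gz", "/tmp/data_copy.txt");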
10/07/2015
[TIBCO] BusinessWorks 6 Compress Plugin for TAR and GZ
Happy to announce the availability of the 1.2.0.FINAL version of the TIBCO BW6 Compress palette. It supports ZIP, GZ and TAR formats. This is developed in my free time and it's not endorsed, verified or supported by TIBCO in any way (yet?).
It uses Apache Common Compress library 1.9 with no modifications and it's packaged within the plugin.
Online docs are (temporarily) available at: http://digilander.libero.it/otacoconvention/archivi/TIBCO_BW6_Compress_plugin_doc/index.html and can also be accessed from BusinessStudio by selecting the activity and pressing F1 (requires internet connection).
I would urge anyone wanting to try this to take all necessary precautions, even though the version number says FINAL.
Check GitHub https://github.com/steghio/TIBCO_BW6_Compress_Plugin/ for the source code, sample project and Eclipse (BW6) installer. If you want to manipulate it, READ THE GUIDE first.
History: first release 1.0.0.FINAL
04/07/2015
[Oracle] Remove tablespace with missing DBF file
Well, nobody's perfect. But if the software is good, we can afford not to be flawless.
Say instead of dropping a tablespace the proper way from your Oracle DB, you deleted its DBF file instead; how can you make Oracle forget about this and let you create a new one with the same name and location?
Luckily, you can still salvage the situation by issuing some commands while connected as sys:
SELECT * FROM sys.dba_data_files;
Now find your tablespace and copy the value from the FILE_NAME column, then delete the file association:
ALTER DATABASE DATAFILE 'file_name_we_got_before' OFFLINE DROP;
Finally, drop the tablespace itself:
DROP TABLESPACE your_tablespace INCLUDING CONTENTS;
And you're back in business
26/06/2015
[TIBCO] BusinessWorks 6 Compress Plugin for ZIP
UPDATE: check latest version 1.2.0.FINAL here
Happy to announce the availability of the 1.0.0.FINAL version of the TIBCO BW6 Compress palette. It includes Zip and Unzip activities. This is developed in my free time and it's not endorsed, verified or supported by TIBCO in any way (yet?).
It uses Apache Common Compress library 1.8.1 with no modifications and it's packaged within the plugin.
[PostgreSQL] Post install error "Password authentication failed for user" when connecting to DB
So you decided that MySQL isn't to your liking, and since the other alternatives aren't free enough, you tried out PostgreSQL. Freshly after installation, you happily try to connect to your DB either via command line (\q guys... \q, seriously?) or pgAdmin, but are baffled when this message appears on screen:
FATAL: Password authentication failed for user "Administrator"
because you did not remember creating a user during installation, and the only password you were asked to define isn't working.
The solution? Try to connect as postgres user:
psql -U postgres
If it's still not working, check under postgresql.conf if the listen_addresses parameter is specifying a correct and reachable address.
[SQL] Oracle subquery in join statement
In Oracle, it's possible to use sub queries in a join statement by giving an alias to the subquery and joining on that alias:
SELECT a.column1, a.column2, c.column3
FROM a JOIN (
SELECT b.column1, b.column2, b.column3
FROM b
) c
ON (a.column1 = c.column1 AND a.column2 = c.column2)
Obviously you would never write a SIMPLE query EXACTLY like the example above; it's just to show the mechanics for when you actually need to create a slightly more complex one.
06/06/2015
[OSGi] Spring error Unable to locate Spring NamespaceHandler for XML schema namespace
You're working with OSGi because it's flexible, you like components, and so on, and you also like Spring because you love writing XMLs instead of Java code.
You think it would be a good idea to join these technologies, but reading is so boring; plus, since both use some config files, you just mash them together and you should be good with your Sprosgienstein creation. But then you realize that some small differences, for example in classpath handling and dependency management, are making your life difficult.
A common error you might get is: "Unable to locate Spring NamespaceHandler for XML schema namespace XXX"
22/05/2015
[Windows Server 2012] Add role fails
When trying to add a role (e.g. Active Directory) to a Windows Server 2012 machine, I found that it would try to enable the components but always fail in the end, saying that the machine needed a reboot. It doesn't matter how many reboots you do, it will still fail.
The reason was that these services had to be enabled before trying to add the new role (see the commands after the list):
- Server
- Workstation
- Computer Browser
- TCP/IP NetBIOS Helper
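From an elevated command prompt, something along these lines should set them to start automatically and bring them up (these are the standard internal service names as far as I know, so double-check them on your system):
sc config LanmanServer start= auto
sc config LanmanWorkstation start= auto
sc config Browser start= auto
sc config lmhosts start= auto
net start LanmanServer
net start LanmanWorkstation
net start Browser
net start lmhosts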
[Linux] Check remote port connection
It may happen that you're working on a server with a user that has very limited capabilities. Sometimes you have no access to telnet:
telnet host port
nmap:
nmap -A host -p port
curl - it will complain that it did not receive data, meaning that it connected successfully:
curl http://host:port
netcat:
nc host port
or whatever other tool you like to use. So what to do? Well, maybe you have access to bash:
cat < /dev/tcp/host/port
but I find this method not very reliable. Instead you might have access to Python as well:
python
import socket
test_conn=socket.create_connection(('host',port))
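And if you want something slightly more robust than the bare one-liner (an explicit timeout and a clear success/failure message), a few more stdlib lines do it:
import socket
try:
    conn = socket.create_connection(('host', port), timeout=5)
    print 'port is open'
    conn.close()
except Exception as e:
    print 'connection failed:', e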
12/04/2015
[Java] JMX connection and bean operation invocation
In this example we'll see how to create a JMX connection, with or without authentication, and invoke a bean method.
An important thing to note: always remember to check first whether the user calling the method has the required permissions to invoke it.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import java.util.Hashtable;

public class MyJMXConnection {

    private static String HOST = "HOST";
    private static String PORT = "PORT"; //usually it's 1099
    private static String BEAN_NAME = "BEAN_NAME"; //e.g. com.groglogs.myclass:type=MyObject
    private static String BEAN_OP = "BEAN_OPERATION";
    private static String USER = "USERNAME";
    private static String PASS = "PASSWORD";

    public static void main(String[] args) throws Exception {
        try {
            JMXServiceURL target = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + HOST + ":" + PORT + "/jmxrmi");
            //only if authentication is required
            Hashtable<String, String[]> env = new Hashtable<String, String[]>();
            String[] credentials = new String[]{USER, PASS};
            env.put(JMXConnector.CREDENTIALS, credentials);
            JMXConnector connector = JMXConnectorFactory.connect(target, env);
            //if authentication is not required, simply use: JMXConnector connector = JMXConnectorFactory.connect(target);
            MBeanServerConnection remote = connector.getMBeanServerConnection();
            ObjectName bean = new ObjectName(BEAN_NAME);
            //invoke the remote method; ensure first that the user calling it has the necessary rights!
            remote.invoke(bean, BEAN_OP, null, null);
            connector.close();
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
    }
}
08/03/2015
[Guitar tab] Land of the Livin dead - Rayman Origins
This is possibly the best music theme from the game; tip of the hat to both Christophe Héral and Billy Martin.
The original tab was found on VideogameJam.
Download it from here
15/02/2015
[CentOS 7] Chrome use system theme fix
After installing Google Chrome on CentOS 7 with the default theme enabled, you may notice that it will display the mouse pointer in the default Xorg theme instead of blending in with the system.
A workaround is to edit the /usr/share/icons/default/index.theme file by changing
Inherits=dmz-aa
to
Inherits=Adwaita
13/02/2015
[Linux] Boot in console mode
So you absolutely wanted that fancy video driver to work, but it didn't. Now it's impossible to boot into your system, even with the Ctrl+Alt+FX combination to get a tty to show up.
You can however boot directly into console mode. From the GRUB menu, before selecting a line to boot, press the "e" key.
Find the line you were going to boot (usually starts with "linux") and replace:
rhgb quiet (or whichever similar parameter you may have)
with:
text 3
or another number which will be the runlevel of your choice.
Then press F10 to boot from that configuration or save, select the modified line and press enter.
Once you're logged in your console you can always try to start the graphical environment with:
startx
[CentOS] Detect Windows installation and update GRUB
After installing CentOS on your wannabe dual-boot machine, you realize with horror that for very important and critical reasons, your system does not have a dual boot menu and it's not able to recognize and read the Windows partition.
The solution is luckily very simple:
yum install epel-release
yum install ntfs-3g
This will allow the system to correctly manage NTFS filesystems.
grub2-mkconfig -o /boot/grub2/grub.cfg
This will update GRUB so that it shows the dual boot options now that it's able to recognize the Windows partition.
grub2-set-default X
This is optional and it's used to set the X kernel/OS as default when starting the system. You can find the number by reading the file /boot/grub2/grub.cfg
17/01/2015
[Vim] Remove empty lines from file
Once you learn the magic :q! combination to close Vim discarding changes, you find out that it is quite a powerful tool.
To remove empty lines from a file you might try with some specific commands such as:
:g/^$/d
Where g tells vim to execute a command only on the lines that match a regular expression, ^$ is said regular expression to match empty (blank) lines, and d is the delete command.
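If your 'empty' lines actually contain spaces or tabs, a slightly broader pattern catches those too:
:g/^\s*$/d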
Bonus: if, when you open the file, you see a lot of ^M characters, it means you're editing it in Unix format while the file was created in DOS format. You can either tell vim to treat it as DOS format before running the previous instruction:
:set ff=dos
or convert the file to Unix format beforehand with the dos2unix command:
dos2unix -n in out
[Yum] Prevent packages from being installed or updated
Even though it is recommended to always run the latest software version for bug fixes and security purposes, sometimes the package maintainers might have gotten too many donations and slip up on the next release.
And if that release is a kernel release, things might get ugly. Luckily, you can prevent yum from installing or updating packages by adding the --exclude parameter to your commands:
yum --exclude=PACKAGE update
This will update the system but not the packages named PACKAGE. Its scope is limited to the single command, so a second yum update will not exclude them.
Eg to exclude kernel packages:
yum --exclude=kernel* update
To make the exclusion permanent, edit /etc/yum.conf and add a line:
exclude=PACKAGE
[Linux] Test RPM dependencies and installation without altering the system
So you already know how to do this with APT, but what if you're using an RPM-based distro?
Easily enough:
repoquery --requires --recursive --resolve PACKAGE_NAME
will check and list the package dependencies and:
rpm -ivh --test PACKAGE_NAME
will run a dry install, which will show you what changes would take place without actually installing anything.
[Fedora] VLC repository
In order to install VLC on Fedora, you'll need to add their repository to your sources.
Simply:
yum install --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
yum install vlc
If you find that you cannot play some media types, try installing these gstreamer plugins as well:
yum install gstreamer-plugins-good gstreamer-plugins-bad gstreamer-plugins-ugly
[Yum] Add and manage custom repositories
In the latest Fedora versions, it is possible to add custom repositories to yum by creating a .repo file under the /etc/yum.repos.d/ folder following this structure:
[REPOSITORY_NAME]
baseurl=REPOSITORY_URL
enabled=1
gpgcheck=1
gpgkey=URL_TO_KEY
This will add and enable the repo available at REPOSITORY_URL signed by the GPG key found at URL_TO_KEY to your list.
Here's a couple of official Fedora repos for Google Chrome and VirtualBox:
[Yum] Fix rpmdb open failed error
Although not as good as other package managers, yum has come a long way since I last tried it.
However, due to me installing software using the rpm command and other graphical package managers (it is bad, seriously), I managed to screw something up, causing it to complain with a "rpmdb open failed" error.
The fix is easy enough though:
rm -rf /var/lib/rpm/__db*
rpm --rebuilddb
yum clean all
yum update
[Fedora] [Gnome] Use delete key to delete files
For some reason, on Fedora 21 - and possibly earlier versions too - you're required to press Ctrl + Delete to send a file to the trash instead of the plain old single key Delete.
This behaviour can be reverted back to the good old ways by editing the accels file under /home/USER/.config/nautilus/
Find the line:
;(gtk_accel_path "<Actions>/DirViewActions/Trash" "<Primary>Delete")
and edit it to:
(gtk_accel_path "<Actions>/DirViewActions/Trash" "Delete")
now if you log out and in again you're set.
[Eclipse] [ApacheDS] Crash soup_session_feature_detach on startup
Older Eclipse versions might run into bug 968064 - note that there exist multiple reports of it for other distros as well - which prevents the application from starting up.
Upgrading to a newer version should fix the issue, but there's also a simple workaround. Just add this parameter to the JVM options in the eclipse.ini file:
-Dorg.eclipse.swt.browser.DefaultType=mozilla
If using ApacheDS, the fix can be applied to the config/config.ini file by adding this line anywhere:
org.eclipse.swt.browser.DefaultType=mozilla
[Linux] Compress and split file or directory
On Linux, it is possible to compress anything and split the resulting archive with the split command:
split -b SIZE - FILENAME_
Note that the trailing underscore _ isn't required but helps organize the file names.
For example, to create an archive with tar and chunk it into 1KiB pieces:
tar cz myFile | split -b 1KiB - out.tgz_
To decompress it, simply recreate the file with cat first:
cat FILENAME_* | tar xz
e.g.:
cat out.tgz_* | tar xz