mardi 21 avril 2015

Unable to update value in database using AND operator in postgresql

I'm working on a Java Swing application in which I'm trying to update a PostgreSQL database when the user makes any changes. There are four fields in the table. If I try to update one column at a time it works fine, but when I use AND it displays the following error message:

Caught Exception:- org.postgresql.util.PSQLException: ERROR: argument of AND must be type boolean, not type character varying

Given below is the code that I'm using to update the table.

Connection con = DbConnection.getConnect();
Statement stmt = con.createStatement();
String sql = "UPDATE webservice_config set product = ? AND product_cat = ? AND stock = ? where ID=?;";
PreparedStatement pstInsert = con.prepareStatement(sql);
pstInsert.setString(1, product);
pstInsert.setString(2, productCat);
pstInsert.setString(3, stock);
pstInsert.setString(4, "1");
pstInsert.executeUpdate();
con.close();

Any help?
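
For reference, in an UPDATE statement the assignments in the SET clause are separated by commas, not AND; a minimal sketch of the corrected statement:

UPDATE webservice_config
SET product = ?, product_cat = ?, stock = ?
WHERE id = ?;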

SQL union of two tables with different columns

I want to get a single result set of rows back from two tables with different column names, one output row per row in each table.

Result should look like this, blanks can be null, team_id in 2nd half comes from coach_id:

-----------------------------------------
player_id | team_id | score | improvement
-----------------------------------------
11          20         5
11          21         4
12          22         2
12          23         2
11          20                   5
11          21                   6
12          21                   5
13          23                   10

Here's the schema:

CREATE TABLE coaches
    (`id` int, `team_id` int)
;

INSERT INTO coaches
    (`id`, `team_id`)
VALUES
    (1, 20),
    (2, 21),
    (3, 22),
    (4, 23)
;

CREATE TABLE players
 (`id` int, `player_id` int);


INSERT INTO players
(`id`, `player_id`)
VALUES
(1,11),
(2,12),
(3,13),
(4,14)
;
CREATE TABLE games
    (`id` int, `player_id` int, `team_id` int, `score` int)
;

INSERT INTO games
    (`id`, `player_id`, `team_id`, `score`)
VALUES
    (1, 11, 20, 5),
    (2, 11, 21, 4),
    (3, 12, 22, 2),
    (4, 12, 23, 2)
;

CREATE TABLE sessions
    (`id` int, `player_id` int, `coach_id` int, `improvement` int)
;

INSERT INTO sessions
      (`id`, `player_id`, `coach_id`, `improvement`)
VALUES
    (1, 11, 1, 5),
    (2, 11, 2, 6),
    (3, 12, 2, 5),
    (4, 13, 4, 10)
;

Tried this, but it wasn't really close:

SELECT tweets.player_id
      ,tweets.team_id
      ,follows.coach_id 
FROM tweets FULL OUTER JOIN follows ON (1 = 0);
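
Based on the schema above, one hedged approach is a UNION ALL that pads the missing columns with NULL and maps coach_id to team_id through the coaches table; a sketch:

SELECT g.player_id, g.team_id, g.score, NULL AS improvement
FROM games g
UNION ALL
SELECT s.player_id, c.team_id, NULL AS score, s.improvement
FROM sessions s
JOIN coaches c ON c.id = s.coach_id;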

NullReferenceException with LINQ to PostgreSQL

I have a project that uses linq2db with PostgreSQL; I have the library and its three dependencies.

But when I try to connect to the DB I get this error:

A first chance exception of type 'System.NullReferenceException' occurred in linq2db.dll. Additional information: Object reference not set to an instance of an object. (Translated.)

Stack trace:

in LinqToDB.Data.DataConnection..ctor(String configurationString) in i:\linq2db\Source\Data\DataConnection.cs:line 41
in LinqToDB.Data.DataConnection..ctor() in i:\linq2db\Source\Data\DataConnection.cs:line 24
in modelo.BDGestion.bdgestionDB..ctor() in d:\Proyectos\Proyectos VisualStudio\TimeSheets\TimeSheets\modelo\BDGestion.PostgreSQL.generated.cs:line 44
in TimeSheets.SeleccionPersonal..ctor() in d:\Proyectos\Proyectos VisualStudio\TimeSheets\TimeSheets\SeleccionPersonal.cs:line 23

public partial class SeleccionPersonal : Form
    {
        public SeleccionPersonal()
        {
            InitializeComponent();
            try
            {
                using (var db = new bdgestionDB())
                {

                }
            }catch(Exception ex){
                MessageBox.Show(ex.ToString());
            }
        }
    }

I generated the classes using the template; the connection string is well formed and properly added to the connectionStrings section. I checked it myself in this solution, and a friend checked it on another computer of his own. If you need any more information, just ask.

Generated class:

public partial class bdgestionDB : LinqToDB.Data.DataConnection
{
    tables...

    public bdgestionDB()
    {
        InitDataContext();
    }

    public bdgestionDB(string configuration)
        : base(configuration)
    {
        InitDataContext();
    }

    partial void InitDataContext();
}

The App.config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <connectionStrings>
    <add name="Postgres" connectionString="Server=thot-test;Port=5432;Database=bdgestion;UserId=test;Password=T15Pmc;Pooling=true;MinPoolSize=10;MaxPoolSize=100;Protocol=3;"/>
  </connectionStrings>
</configuration>

Which query plan is faster/better

I have two query plans for different queries that return the same results. I'm wondering if someone can tell me which one is "better", and why.

SELECT * 
FROM bids order by (select ranking from users where users.id = runner_id) DESC limit 1


"Limit  (cost=17.62..17.62 rows=1 width=568)"
"  ->  Sort  (cost=17.62..17.62 rows=2 width=568)"
"        Sort Key: ((SubPlan 1))"
"        ->  Seq Scan on bids  (cost=0.00..17.61 rows=2 width=568)"
"              SubPlan 1"
"                ->  Index Scan using users_pkey on users  (cost=0.28..8.29 rows=1 width=4)"
"                      Index Cond: (id = bids.runner_id)"

Second statement and plan:

SELECT  "bids".* 
FROM "bids" inner join users u on bids.runner_id=u.id  ORDER BY u.ranking DESC LIMIT 1

"Limit  (cost=17.64..17.64 rows=1 width=572)"
"  ->  Sort  (cost=17.64..17.64 rows=2 width=572)"
"        Sort Key: u.ranking"
"        ->  Nested Loop  (cost=0.28..17.63 rows=2 width=572)"
"              ->  Seq Scan on bids  (cost=0.00..1.02 rows=2 width=568)"
"              ->  Index Scan using users_pkey on users u  (cost=0.28..8.29 rows=1 width=8)"
"                    Index Cond: (id = bids.runner_id)"

lundi 20 avril 2015

Multiple Rails projects with different databases for each

I'm new to Rails, and after going through the Heroku tutorial I've set up a project using PostgreSQL.

A friend also learning Rails followed a different tutorial which used SQLite as the database, which I've cloned.

When starting the Rails server with the SQLite project, I keep getting an error suggesting that the project is trying to use PostgreSQL and telling me to add gem 'pg' to my Gemfile.

All I can seem to find online are various posts about migrating to PostgreSQL from MySQL/SQLite/etc., but I am not looking to migrate.

My database.yml is as follows:

# SQLite version 3.x
#   gem install sqlite3
#
#   Ensure the SQLite 3 gem is defined in your Gemfile
#   gem 'sqlite3'
#
default: &default
  adapter: sqlite3
  pool: 5
  timeout: 5000

development:
  <<: *default
  database: db/development.sqlite3

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: db/test.sqlite3

production:
  <<: *default
  database: db/production.sqlite3

The error reported in RubyMine is:

/usr/bin/ruby -e $stdout.sync=true;$stderr.sync=true;load($0=ARGV.shift) /Users/Ty/Rails/TSCLunch/web/bin/rails server -b 0.0.0.0 -p 3000 -e development
=> Booting WEBrick
=> Rails 4.2.0 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
Exiting
/Users/Ty/.gem/ruby/2.0.0/gems/activerecord-4.2.0/lib/active_record/connection_adapters/connection_specification.rb:177:in `rescue in spec': Specified 'postgresql' for database adapter, but the gem is not loaded. Add `gem 'pg'` to your Gemfile (and ensure its version is at the minimum required by ActiveRecord). (Gem::LoadError)
    from /Users/Ty/.gem/ruby/2.0.0/gems/activerecord-4.2.0/lib/active_record/connection_adapters/connection_specification.rb:174:in `spec'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activerecord-4.2.0/lib/active_record/connection_handling.rb:50:in `establish_connection'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activerecord-4.2.0/lib/active_record/railtie.rb:120:in `block (2 levels) in <class:Railtie>'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/lazy_load_hooks.rb:38:in `instance_eval'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/lazy_load_hooks.rb:38:in `execute_hook'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/lazy_load_hooks.rb:28:in `block in on_load'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/lazy_load_hooks.rb:27:in `each'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/lazy_load_hooks.rb:27:in `on_load'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activerecord-4.2.0/lib/active_record/railtie.rb:116:in `block in <class:Railtie>'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:30:in `instance_exec'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:30:in `run'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:55:in `block in run_initializers'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tsort.rb:150:in `block in tsort_each'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tsort.rb:183:in `block (2 levels) in each_strongly_connected_component'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tsort.rb:219:in `each_strongly_connected_component_from'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tsort.rb:182:in `block in each_strongly_connected_component'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tsort.rb:180:in `each'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tsort.rb:180:in `each_strongly_connected_component'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tsort.rb:148:in `tsort_each'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/initializable.rb:54:in `run_initializers'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/application.rb:352:in `initialize!'
    from /Users/Ty/Rails/TSCLunch/web/config/environment.rb:5:in `<top (required)>'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:274:in `require'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:274:in `block in require'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:240:in `load_dependency'
    from /Users/Ty/.gem/ruby/2.0.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:274:in `require'
    from /Users/Ty/Rails/TSCLunch/web/config.ru:3:in `block in <main>'
    from /Users/Ty/.gem/ruby/2.0.0/gems/rack-1.6.0/lib/rack/builder.rb:55:in `instance_eval'
    from /Users/Ty/.gem/ruby/2.0.0/gems/rack-1.6.0/lib/rack/builder.rb:55:in `initialize'
    from /Users/Ty/Rails/TSCLunch/web/config.ru:in `new'
    from /Users/Ty/Rails/TSCLunch/web/config.ru:in `<main>'
    from /Users/Ty/.gem/ruby/2.0.0/gems/rack-1.6.0/lib/rack/builder.rb:49:in `eval'
    from /Users/Ty/.gem/ruby/2.0.0/gems/rack-1.6.0/lib/rack/builder.rb:49:in `new_from_string'
    from /Users/Ty/.gem/ruby/2.0.0/gems/rack-1.6.0/lib/rack/builder.rb:40:in `parse_file'
    from /Users/Ty/.gem/ruby/2.0.0/gems/rack-1.6.0/lib/rack/server.rb:299:in `build_app_and_options_from_config'
    from /Users/Ty/.gem/ruby/2.0.0/gems/rack-1.6.0/lib/rack/server.rb:208:in `app'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/commands/server.rb:61:in `app'
    from /Users/Ty/.gem/ruby/2.0.0/gems/rack-1.6.0/lib/rack/server.rb:336:in `wrapped_app'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/commands/server.rb:139:in `log_to_stdout'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/commands/server.rb:78:in `start'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:80:in `block in server'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:75:in `tap'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:75:in `server'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
    from /Users/Ty/.gem/ruby/2.0.0/gems/railties-4.2.0/lib/rails/commands.rb:17:in `<top (required)>'
    from /Users/Ty/Rails/TSCLunch/web/bin/rails:4:in `require'
    from /Users/Ty/Rails/TSCLunch/web/bin/rails:4:in `<top (required)>'
    from -e:1:in `load'
    from -e:1:in `<main>'

Process finished with exit code 1

Order by foreign key in activerecord: without a join?

I want to expand this question. order by foreign key in activerecord

I'm trying to order a set of records based on a value in a really large table. When I use a join, it brings all the "other" records' data into the objects, as a join should.

#table users  30+ columns
#table bids  5 columns
record = Bid.find(:all,:joins=>:users, :order=>'users.ranking DESC' ).first

Now record holds 35 fields.

Is there a way to do this without the join?

Here's my thinking.
With the join I get this query:

SELECT * FROM "bids" 
left join users on runner_id = users.id  
ORDER BY ranking LIMIT 1

Now I can add a select to the code so I don't get the full user table, but putting a select in a scope is dangerous, IMHO.

When I write the SQL by hand:

SELECT * FROM bids 
order by (select users.ranking from users where users.id = runner_id) DESC
limit 1

I believe this is a faster query; based on the "explain" it seems simpler.
More important than speed, though, is that the second method doesn't have the 30 extra fields.
If I build a custom select inside the scope, it could explode other searches on the object if they too have custom selects (there can be only one).

Solr AutoCommit not working with Postgresql

I am using Solr 4.10.0 with PostgreSQL 9.3. I was able to configure my Solr core properly using data-config.xml and search across the different database tables. However, I am not able to set up the autoCommit feature. Whenever any row gets added to a table, I expect it to start appearing in the results after the maxTime (1 minute), but that doesn't happen. I have to explicitly rebuild the index by doing a full data-import, and then everything works fine.

My solrconfig.xml is:

  <updateHandler class="solr.DirectUpdateHandler2">

     <autoCommit>
       <maxTime>60000</maxTime>
       <openSearcher>true</openSearcher>
     </autoCommit>

     <autoSoftCommit>
       <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
     </autoSoftCommit>

  </updateHandler>

Is there something extra needs to be done for using autoCommit here? I checked my log files as well but there is no error / exception. What am I missing?

How do I keep all escape characters in a SQL string for querying a POSTGRES DB using R?

I am using R to query data from a Postgres database. The query I created in pgAdmin3 has lots of escape characters in it, and I'd like to pull in the entire SQL string without having to deal with escape characters again, something like Python's triple quotes """.

SELECT brand
,regexp_replace(upper(hybridtype), '\(.*\)|\/|-|TM|\s','','g') as hybrid
FROM seeds

How can I get that SQL text into R retaining all the characters?

Postgis - geography data type via JDBC

I am working on a web project, using PostgreSQL 9.3 and PostGIS 2.1.

I have a column of type geography in a table; it just stores a point.

Now I need to insert/select this type via JDBC using a Java object.

I have read the PostGIS manual, but it provides little information.

My questions are:

  1. In a Java POJO model class, which Java type should I use for the column?
  2. How do I write the insert SQL? (see the sketch after this list)
  3. How do I write the select SQL to retrieve the value and put it into a Java object?
  4. If I am using MyBatis, does it affect the answers to the above questions?
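
For (2) and (3), one hedged approach (a sketch; the table name places and its columns are assumptions) is to keep plain doubles at the JDBC boundary and let PostGIS functions do the conversion:

-- insert: bind longitude and latitude as double parameters
INSERT INTO places (geo)
VALUES (ST_SetSRID(ST_MakePoint(?, ?), 4326));

-- select: read the point back as plain doubles (or use ST_AsText for WKT)
SELECT ST_X(geo::geometry) AS lon, ST_Y(geo::geometry) AS lat
FROM places
WHERE id = ?;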

Error when modifying Postgres JSON field in Rails 3.2

I'm trying to use the JSON (Postgres 9.3, so not JSONB) field type in my Rails 3.2 application.

I've created the field with the following migration without a problem:

def change
  add_column :blocks, :meta, :json
end

However, when I'm trying to modify the field like this:

b.meta = {name: "John"}
b.save

I'm getting the following error:

ActiveRecord::StatementInvalid: PGError: ERROR:  invalid input syntax for type json
LINE 1: UPDATE "blocks" SET "meta" = '---

I'm not even sure if JSON type is supported in Rails 3.2 but I've seen some posts talking about it as possible (though no details on how it works).

Scalike JDBC insert Geometry

In psql I can insert a type of GEOMETRY with:

insert into table (geo) 
values (ST_SetSRID(ST_MakePoint(-28.095519, 153.459987), 4326));

Given I have a type Point(lat: Double, long: Double), can I do an insert like this using scalikejdbc?

Creating a tree template in SQL

I am working with a database used to store information about entities in the form of a tree of components. For example, it might have a camry object with children air-conditioner and engine. The engine might have pistons as children, and air-con have vents as children.

The idea is that the user would custom-create something like this as a 'template' which would then be used to instantiate the camry 'tree' as needed. So the user might first create the template, and then use it to add ten of those camry trees to a workshop, storing unique data against each by selecting 'add new car', selecting camry, and then picking a name.

How would you store such a construction in a database, and is there any easy way to instantiate a tree like that?
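
One hedged way to model this (a sketch; all table and column names are assumptions) is an adjacency-list template table plus an instance table that records which template node each instance came from; the template can then be walked with a recursive CTE when instantiating:

CREATE TABLE component_templates (
    id        serial PRIMARY KEY,
    parent_id integer REFERENCES component_templates(id),
    name      text NOT NULL              -- e.g. 'camry', 'engine', 'piston'
);

CREATE TABLE component_instances (
    id          serial PRIMARY KEY,
    template_id integer NOT NULL REFERENCES component_templates(id),
    parent_id   integer REFERENCES component_instances(id),
    name        text                     -- per-car data, e.g. the chosen name
);

-- walk a template tree rooted at node 1
WITH RECURSIVE tree AS (
    SELECT id, parent_id, name FROM component_templates WHERE id = 1
    UNION ALL
    SELECT c.id, c.parent_id, c.name
    FROM component_templates c
    JOIN tree t ON c.parent_id = t.id
)
SELECT * FROM tree;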

I want to replace a string with another string from a list depending on the value

Let's say I have this string: '2015/4/21 (Tuesday)'.

I want to replace 'Tuesday' with another string, for example: cat.

The result should be: '2015/4/21 (cat)'.

But I also want it to be dynamic: if it's Tuesday, then cat; if it's Monday, then dog; and so on.

How do I do this in PostgreSQL 8.4? Thanks!

There is a similar post: postgresql - replace all instances of a string within text field

But mine needs to replace something dynamic depending on the day and that post replaces a known value.
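
A hedged sketch, assuming the weekday-to-word pairs live in a small look-up table (here called day_map, which is not part of the original post): join on the day name and use replace(), which avoids regex escaping and works on 8.4:

-- day_map(day_name text, word text) is an assumed look-up table,
-- e.g. ('Tuesday', 'cat'), ('Monday', 'dog'), ...
SELECT replace(t.s, m.day_name, m.word) AS relabeled
FROM (SELECT '2015/4/21 (Tuesday)'::text AS s) AS t
JOIN day_map AS m ON position(m.day_name IN t.s) > 0;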

Problems while Using postgres_fdw

I'm running into a problem while using postgres_fdw.

    CREATE SERVER foreign_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '192.162.0.1', port '5432', dbname 'Test');



    CREATE USER MAPPING FOR postgres
    SERVER foreign_server
    OPTIONS (user 'foreign_user', password 'password');


    CREATE FOREIGN TABLE foreign_table (
    id serial NOT NULL,
    data text)SERVER foreign_server
    OPTIONS (schema_name 'public', table_name 'employee');


    select * from employee  where user ='foreign_user'

Now I can see that entries are made in the pg_foreign_data_wrapper, pg_foreign_server and pg_foreign_table catalogs.
But how do I access the employee table of the remote system?

I mean, select * from employee where user = 'foreign_user' doesn't give any results, even though the remote system's employee table has data.

Please help.
Regards
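
For what it's worth, the foreign table was created under the local name foreign_table, so that is the name to query; a minimal sketch:

-- the local name from CREATE FOREIGN TABLE maps to public.employee on the remote server
SELECT * FROM foreign_table;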

Insert string array into postgres text array field

I'm trying to make an article taggable.

Article table:

type Article struct {
  ID int64
  Body string
  Tags string
}

Preparing values:

tags := r.FormValue("tags")
tagArray := fmt.Sprintf("%q", strings.Split(tags, ", ")) // How do I make use of this?

t := Article{
    Body: "this is a post",
    Tags: `{"apple", "orange"}`,    // I have to hard code this for this to work.
}
if err := t.Insert(Db); err != nil {
   // Error handling
}

Database query:

func (t *Article) Insert(db *sqlx.DB) error {
    nstmt, err := db.PrepareNamed(`INSERT INTO articles
    (body, tags)
    VALUES (:body, :tags)
    RETURNING *;
    `)
    if err != nil {
        return err
    }
    defer nstmt.Close()

    err = nstmt.QueryRow(t).StructScan(t)
    if err, ok := err.(*pq.Error); ok {
        return err
    }
    return err
}

Postgres setup for tags field:

tags character varying(255)[] DEFAULT '{}',

It seems like I have to hard code the value for tags for it to work. Otherwise I would get errors like these:

pq: missing dimension value
OR
pq: array value must start with "{" or dimension information

How do I make use of tagArray?

Helpful reference: http://ift.tt/1yK423Y
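
For reference, the text bound to :tags has to be a PostgreSQL array literal; a sketch of the value shape the column accepts:

-- the bound parameter must look like an array literal, e.g. '{"apple","orange"}'
INSERT INTO articles (body, tags)
VALUES ('this is a post', '{"apple","orange"}');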

Postgres 9.3.5 not accepting VARCHAR(MAX) GORM

I have a Grails application that connects to a Postgres Plus Advanced Server. I have a domain in my Grails application that has a column with sqlType: VARCHAR(MAX). When the table is created by GORM, the column with sqlType: VARCHAR(MAX) is created as text. But when I changed my database to PostgreSQL 9.3.5, it fails to create the table and reports a syntax error. Is that one of the differences between Postgres Plus Advanced Server and PostgreSQL 9.3.5? If so, how can I handle this without having to change code, since there are a lot of domains using sqlType: VARCHAR(MAX) in my application?

Thanks

Speed up Postgresql insert performance from vb.net

I have a Vb.Net app (VS2010) as well as Postgres 9.3 and psqlodbc version 9.3.4. I interact with my Postgres database via ODBCConnection and the PostgreSQL odbc driver.

Doing simple transactional bulk inserts into a database table (no indexes or other high-tech stuff, just plain old records and one serial column as the primary key) is slow even when conducted on my local machine (that is, the app writes data to a local Postgres database with no network involved). I have seen speeds of 1000-1500 rows per second. My hardware is Win7 64-bit, a SATA 6 Gb/s hard drive, and 8 GB of RAM.

As per the post by Maxim Egorushkin on How to speed up insertion performance in PostgreSQL, he suggested using:

SQLSetConnectAttr(conn, SQL_ATTR_AUTOCOMMIT, reinterpret_cast(SQL_AUTOCOMMIT_OFF), 0);

to prevent autocommit from occurring. I am wondering how you would call this subroutine from .net since it appears to be part of the WIN API and thus the "conn" variable is likely a handle to a non .Net object.

I am confident this approach or some other one should be able to speed things up: apparently the non-free PGNP ODBC driver for postgres can do 10s of 1000s of inserts per second.

Input on how to disable autocommit is much appreciated.

Boolean value not consistent w/PostgreSQL

When I insert a boolean value into one table its value is true/false (primitives?), but in all other tables it is t and f as strings. I am inserting the values in the same way. Why would this happen?

Using PostgreSQL with Ruby on Rails 4.

Why is my Rails App defaulting to SQLite3 instead of PSQL on digitalocean?

When I try to create a Rails app on DigitalOcean (Ubuntu/nginx/Passenger) it always defaults to sqlite3 and I get an error message in the browser:
SQLite3::CantOpenException. But when I use WEBrick locally it works fine running on Postgres. What could be causing this, and how can I get my app to use Postgres? It's installed on DigitalOcean and I can migrate and seed the database in my app; it just seems to want to use sqlite3. Any ideas?

How to replace all subsets of characters based on values of other tables in pl/pgsql?

I've been doing some research on how to replace substrings in a single row based on the values in the columns of another table's rows, but I was not able to do so, since the update only used the first row's values from the other table. So I'm planning to do this in a loop in a plpgsql function.

Here are the snippet of my tables. Main table:

 Table "public.tbl_main"
        Column         |  Type  | Modifiers 
-----------------------+--------+-----------
 maptarget             | text   | 
 expression            | text   | 


 maptarget |                 expression
-----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 43194-0   | 363787002:70434600=(386053000:704347000=(414237002:704320005=259470008,704318007=118539007,704319004=50863008),704327008=122592007,246501002=703690001,370132008=30766002)

Look-up table:

Table "public.tbl_values"
        Column         |  Type  | Modifiers 
-----------------------+--------+-----------
 conceptid             | bigint | 
 term                  | text   |

 conceptid |                   term                   
-----------+------------------------------------------
 386053000 | Patient evaluation procedure (procedure)
 363787002 | Observable entity (observable entity)
 704347000 | Observes (attribute)
 704320005 | Towards (attribute)
 704318007 | Property type (attribute)

I want to create a function that will replace all numeric values in the tbl_main.expression columns with their corresponding tbl_values.term using the tbl_values.conceptid as the link to each numeric values in the expression string.

I'm currently stuck on the looping part, since I'm a newbie with plpgsql LOOPs. Here is a rough draft of my function.

--create first a test table
drop table if exists tbl_test;
create table tbl_test as select * from tbl_main limit 1;
--

create or replace function test () 
 RETURNS SETOF tbl_main
 LANGUAGE plpgsql
AS $function$
declare
 resultItem tbl_main;
 v_mapTarget text;
 v_expression text;
 ctr int;
begin
  v_mapTarget:='';
  v_expression:='';
  ctr:=1;

  for resultItem in (select * from tbl_test) loop
     v_mapTarget:=resultItem.mapTarget;
     select into v_expression expression from ee;
     raise notice 'parameter used: %',v_mapTarget;
     raise notice 'current expression: %',v_expression;

     update ee set expression=replace(v_expression, new_exp::text, term) from (select new_exp::text, term from tbl_values offset ctr limit 1) b ; 
     ctr:=ctr+1;
     raise notice 'counter: %', ctr;
     v_expression:= (select expression from ee);
     resultItem.expression:= v_expression;
     raise notice 'current expression: %',v_expression;
return next resultItem;
 end loop;
 return;
 end;
$function$;

Any further information will be much appreciated.
My Postgres version:

PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit
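
If the goal is simply to substitute every conceptid in expression with its term, a simpler hedged sketch loops over the look-up table directly and applies replace(), with no manual counter:

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT conceptid, term FROM tbl_values LOOP
        UPDATE tbl_test
        SET expression = replace(expression, r.conceptid::text, r.term);
    END LOOP;
END;
$$;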

The storage space on my hard drive has shrunk dramatically on Red Hat; I can no longer insert into the database [on hold]

On this hard drive I have my PHP development environment; it works with Postgres, but I can no longer execute any inserts. The OS is Red Hat Enterprise Linux 7.

Can I batch incrementing values in order to reduce database writes?

My web app uses Django on Heroku.

The app has two components: a banner serving API and a reporting API.

Banners served by the app contain an ajax call to the app's reporting API. This call is made once per impression. The purpose of this is to record how many impressions each banner gets each day.

Reporting API abridged source:

def report_v1(request):
    '''
    /report/v1/?cid=

    - cid= the ID of the banner
    '''

    creative = get_creative(creative_id = request.GET.get('cid'))
    if not creative:
        return HttpResponseNotFound("404 creative with that ID not found")

    day = timezone.now()

    fact, created = CreativeFact.objects.get_or_create(
        day = day,
        app = app,
        creative = creative,
        defaults = {
                'impression_count': 0,
            }
        )

    fact.impression_count += 1

    response = HttpResponse("200 OK")
    response["Access-Control-Allow-Origin"] = "*"

    return response

My problem: Is there any way to avoid writing to the database on every single impression? I'm currently tracking hundreds of thousands of impressions this way, and expect to track millions soon, and I think this is inefficient.
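
One common pattern (a sketch, not the app's current code; the table and column names mirror the Django model and are assumptions) is to accumulate counts in memory or a cache and flush them periodically with a single increment per (day, creative):

-- flush an accumulated delta of, say, 37 impressions in one statement
UPDATE creative_facts
SET impression_count = impression_count + 37
WHERE day = '2015-04-21' AND creative_id = 123;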

PHP PostgreSQL Persistent Connection

I have an Ubuntu server that runs the snmptrapd daemon to handle incoming SNMP traps. It loads a PHP script for each trap; there are thousands of devices that we manage, so there are always lots of traps coming in, and this takes a lot of resources to handle, because the script runs many queries and has to create a new DB connection every time. So I need to optimize this script.

Is it possible to have some sort of daemon with persistent database connections to PostgreSQL, so my script can reuse a connection each time it runs?

I'm new to this subject and please go easy on me. Any help will be much appreciated. :)

Postgresql using date interval and SELECT statement

I have a table with 3 columns: a timestamp, a real, and a string. The string column holds various time-frequency values (like minutes, hours, days). I want to take the value in my timestamp column and add to it the values in the other columns. For example, I'd like to do something like

select timestamp '2015-04-20 01:00' + interval '2 hours'

if I had a table with the following columns and values

'action_start_time' =  '2015-04-20 01:00'

'rate' = 2

'frequency' = hours
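
A hedged sketch of that combination: concatenate rate and frequency into an interval literal and cast it (the table name events is a placeholder):

SELECT action_start_time,
       action_start_time + (rate || ' ' || frequency)::interval AS action_end_time
FROM events;   -- events is an assumed table name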

Create timestamp index from JSON on PostgreSQL

I have a table in PostgreSQL with a field named data that is jsonb with a lot of objects, and I want to make an index to speed up the queries. I'm using a few rows to test (just 15 rows), but I don't want to have problems with the queries in the future. I'm getting data from the Twitter API, so within a week I get around 10 GB of data.
If I make the normal index

CREATE INDEX ON tweet((data->>'created_at'));

I get a text index, if I make:

Create index on tweet((CAST(data->>'created_at' AS timestamp)));

I get

ERROR: functions in index expression must be marked IMMUTABLE

I've tried to make it "immutable" by setting the time zone with

date_trunc('seconds', CAST(data->>'created_at' AS timestamp) at time zone 'GMT')

but I'm still getting the "immutable" error. So, how can I build a timestamp index from JSON? I know that I could add a plain column with the date, since it will probably remain constant over time, but I want to learn how to do this.
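
One common workaround (a sketch; it is only safe if every stored created_at string parses the same way regardless of session DateStyle/TimeZone settings) is to wrap the cast in a function declared IMMUTABLE and index that expression:

-- declared IMMUTABLE on purpose; see the caveat above
CREATE OR REPLACE FUNCTION tweet_created_at(t text)
RETURNS timestamptz AS $$
    SELECT $1::timestamptz
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX ON tweet (tweet_created_at(data->>'created_at'));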

DB Defaults to SQLite3 on DigitalOcean

I'm having trouble setting up my Rails app on DigitalOcean (with Ubuntu, nginx and Passenger). My app won't connect to the PostgreSQL db on DO and seems to default to SQLite3 for some reason. In my local environment everything is fine and connects to PostgreSQL, but at DO this error pops up in the browser: "SQLite3::CantOpenException".

I'm sort of new to configuring environments, but I added this to the .bashrc file, thinking maybe this is why it's not working; however, the following line of code does nothing: export APP_ENV="production". I'm not sure what I'm doing wrong. When I created the app I even did it like this: rails new myproject -T -d postgresql. I also read a tutorial that advised me to just put my password into my app, but that seems like a completely ridiculous thing to do. I'm pretty sure I need to set up environment variables, but I thought that this is what the postgresql at the end of "rails new myproject -T -d postgresql" does... I'm lost.

Changing a Column's Data Type using Grails db-migration plugin

I keep hitting this MigrationFailedException

whilst attempting to convert a boolean to integer. I know that at SQL level I could use USING (col_name::integer) to instruct Postgresql how to cast. I wonder if there is any syntax that I could use on my changelog script to prevent getting the aforementioned exception.

The version of the plugin I am using is 1.4.0 and I am favouring groovy over XML.

Connecting PostrgeSQL with R

I'm trying to make a connection to PostgreSQL database in R.

I'm writing this

drv <- dbDriver("PostgreSQL")

But as a result I get an error:

Error: Couldn't find driver PostgreSQL. Looked in:
* global namespace
* in package called PostgreSQL
* in package called RPostgreSQL

What am I doing wrong?

Why is putenv() needed on an already defined environment variable?

When php is used as an apache module, an environment variable coming from an apache SetEnv directive is available to php's getenv(), but it does not appear to be available to C extensions through stdlib's getenv(). At least it happens with the pgsql module.

If the variable is reinstantiated with the php code:

putenv("varname=".getenv("varname"));

then it becomes available to the extension's code.

The question: why is that reinstantiation necessary? How is the core php environment distinct from the "standard" (stdlib) environment?

This occurs with: PHP Version 5.3.10-1ubuntu3.17 in Ubuntu 12.04, as an apache module. When run from the command line, the above workaround is not necessary. From this other question: Using .pgpass from Apache libphp5.so it appears that this workaround is also necessary for php-5.4 under FreeBSD so it's not just Ubuntu or php-5.3.

It doesn't depend on variables_order having E in it. I've tried both EGPCS and GPCS, and $_ENV is not populated when E is not there, as expected, but that doesn't change the result of getenv(), as documented, or apparently the result of stdlib's getenv() from inside extensions.


Demo of the problem with the pgsql module. It's built on top of the libpq shared library written in C, which calls getenv() on a handful of optional PG* environment variables.

In apache configuration file, under a <VirtualHost>, I'm setting this to make connection attempts fail:

SetEnv PGHOST doesnotexist

and not specifying a host in the pg_connect call, so PGHOST must be taken when present.

First try:

$v=getenv("PGHOST");
echo "PGHOST=$v\n";

$cnx=pg_connect("user=daniel");
if ($cnx) {
   echo "Connection is successful.";
} 

Result:

PGHOST=doesnotexist
Connection is successful.

So PGHOST is getting ignored, despite being in the environment.

Second try, now putting again PGHOST into the environment even though it's already there:

$v=getenv("PGHOST");
echo "PGHOST=$v\n";
putenv("PGHOST=".getenv("PGHOST"));
$cnx=pg_connect("user=daniel");
if ($cnx) {
   echo "Connection is successful.";
} 

Result (failure to connect to the specified host, as expected):

PGHOST=doesnotexist
Warning: pg_connect(): Unable to connect to PostgreSQL server:
could not translate host name "doesnotexist" to address:
Name or service not known in /var/www/test/pgtest2.php on line 8

CentOS 6: Not Allowing Connection to Port 5432

I have CentOS 6 as a VM on a Windows box. I can use PuTTY to connect to the machine (on port 22), but I can't connect to PostgreSQL via client applications (pgAdmin) or via telnet mytargetvmip 5432 (I did modify the pg_hba.conf file).

Here is what I have done: I changed the iptables file and then ran a service iptables restart command, but to no avail.

Please note that, in the given iptables file, I had also tried iptables -I INPUT 1 -m tcp -p tcp --dport 5432 -j ACCEPT just before COMMIT, but no luck. I want to be able to connect from any remote IP.

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5432 -j ACCEPT
 COMMIT

Thanks.

does CakePHPs Containable-behavior support custom expressions as conditions?

Community,

I'm currently facing an issue with the containable-behavior setting conditions based on the datasources expression-builder. I'm using CakePHP 2.6.2 with a PostgreSQL database.

What works so far:

I wrote a behavior that dynamically adds conditions to find operations to restrict the results based on a privileges table. I'm using subqueries with the buildStatement() and expression() functions provided by Cake. I followed this article from the CakeBook:

http://ift.tt/wd8e6d

Here is a simplified code snippet; in fact it's two OR-statements:

$conditionsSubQueryRecord = array(
  'Privilege.objecttable' => $model->table,
  'Privilege.objectid = '.$model->alias.'.'.$model->primaryKey,
  'Privilege.read' => true,
  'Privilege.id' => $this->recordPermissions
);
$dsPrivilege = $this->privilegeModel->getDataSource();
$subQueryRecordPrivs = $dsPrivilege->buildStatement(
array(
  'fields'     => array('"'.$this->privilegeModel->alias.'"."id"'),
  'table'      => $dsPrivilege->fullTableName($this->privilegeModel),
  'alias'      => $this->privilegeModel->alias,
  'limit'      => null,
  'offset'     => null,
  'joins'      => array(),
  'conditions' => $conditionsSubQueryRecord,
  'order'      => null,
  'group'      => null
),
$this->privilegeModel
);
$subQueryRecordPrivs = ' EXISTS (' . $subQueryRecordPrivs . ') ';
$subQueryRecordPrivsExpression = $dsPrivilege->expression($subQueryRecordPrivs);

I'm then adding the statement to my conditions array in my behavior's beforeFind() hook. This all works very well so far: the condition is added and the results are filtered.

The conditions are ignored for my contained models:

My problem is now using this condition on contained models. I wrote a recursive algorithm that walks along all the contained models, and if a model actsAs my behavior I attach the same conditions to its conditions array. But when I execute my search, the condition is ignored on the contained models and only attached to the primary model.

This is the complete condition string I'm executing:

array(
    'conditions' => array(
        'Requestinstance.id' => (int) 4,
        (int) 0 => object(stdClass) {
            type => 'expression'
            value => ' EXISTS (SELECT "Privilege"."id" FROM "core"."privileges" AS "Privilege"   WHERE "Privilege"."objecttable" = 'requestinstances' AND "Privilege"."objectid" = "Requestinstance"."id" AND "Privilege"."read" = 'TRUE' AND "Privilege"."id" = (8))  OR  EXISTS (SELECT "Privilege"."id" FROM "core"."privileges" AS "Privilege"   WHERE "Privilege"."objecttable" = 'requestinstances' AND "Privilege"."read" = 'TRUE' AND "Privilege"."id" IN (7, 13, 6, 9, 10, 12) AND "Privilege"."objectid" IS NULL) '
        }
    ),
    'fields' => null,
    'joins' => array(),
    'limit' => (int) 1,
    'offset' => null,
    'order' => array(
        (int) 0 => null
    ),
    'page' => (int) 1,
    'group' => null,
    'callbacks' => true,
    'contain' => array(
        'Requesttype' => array(
            'Steptype' => array(
                'order' => array(
                    (int) 0 => 'RequesttypesSteptype.phase ASC'
                ),
                'conditions' => object(stdClass) {
                    type => 'expression'
                    value => ' EXISTS (SELECT "Privilege"."id" FROM "core"."privileges" AS "Privilege"   WHERE "Privilege"."objecttable" = 'steptypes' AND "Privilege"."objectid" = "Steptype"."id" AND "Privilege"."read" = 'TRUE' AND "Privilege"."id" = (8))  OR  EXISTS (SELECT "Privilege"."id" FROM "core"."privileges" AS "Privilege"   WHERE "Privilege"."objecttable" = 'steptypes' AND "Privilege"."read" = 'TRUE' AND "Privilege"."id" IN (7, 13, 6, 9, 10, 12) AND "Privilege"."objectid" IS NULL) '
                }
            ),
            (int) 0 => 'RequesttypesSteptype',
            'conditions' => object(stdClass) {
                type => 'expression'
                value => ' EXISTS (SELECT "Privilege"."id" FROM "core"."privileges" AS "Privilege"   WHERE "Privilege"."objecttable" = 'requesttypes' AND "Privilege"."objectid" = "Requesttype"."id" AND "Privilege"."read" = 'TRUE' AND "Privilege"."id" = (8))  OR  EXISTS (SELECT "Privilege"."id" FROM "core"."privileges" AS "Privilege"   WHERE "Privilege"."objecttable" = 'requesttypes' AND "Privilege"."read" = 'TRUE' AND "Privilege"."id" IN (7, 13, 6, 9, 10, 12) AND "Privilege"."objectid" IS NULL) '
            }
        ),
        'Stepinstance' => array(
            (int) 0 => 'Steptype',
            (int) 1 => 'Stepdatainstance',
            (int) 2 => 'Sectioninstance'
        ),
        'Requestdatainstance' => array(),
        'Taskinstance' => array()
    ),
    'recursive' => (int) 2
)

As you can see, the condition was correctly added to some of the contained models. However, the executed SQL-query, i.e. for the "Steptype"-Model, is generated without the condition:

SELECT "Steptype"."id" AS "Steptype__id", "Steptype"."name" AS "Steptype__name", "Steptype"."description" AS "Steptype__description", "Steptype"."subscribe" AS "Steptype__subscribe", "RequesttypesSteptype"."id" AS "RequesttypesSteptype__id", "RequesttypesSteptype"."phase" AS "RequesttypesSteptype__phase", "RequesttypesSteptype"."endsphase" AS "RequesttypesSteptype__endsphase", "RequesttypesSteptype"."endsrequest" AS "RequesttypesSteptype__endsrequest", "RequesttypesSteptype"."usertype_id" AS "RequesttypesSteptype__usertype_id", "RequesttypesSteptype"."requesttype_id" AS "RequesttypesSteptype__requesttype_id", "RequesttypesSteptype"."steptype_id" AS "RequesttypesSteptype__steptype_id" FROM "core"."steptypes" AS "Steptype" JOIN "core"."requesttypes_steptypes" AS "RequesttypesSteptype" ON ("RequesttypesSteptype"."requesttype_id" = 6 AND "RequesttypesSteptype"."steptype_id" = "Steptype"."id") ORDER BY "RequesttypesSteptype"."phase" ASC

Direct use of the buildStatement does not work either

I also tried to use the statement itself directly, without building an expression from it. This actually creates exactly the SQL query I want, but it does not quote the table alias in the FROM clause correctly and therefore causes PostgreSQL to throw an error:

SELECT "Requestinstance"."id" AS "Requestinstance__id", "Requestinstance"."user_id" AS "Requestinstance__user_id", "Requestinstance"."created" AS "Requestinstance__created", "Requestinstance"."requesttype_id" AS "Requestinstance__requesttype_id", "Requestinstance"."currentphase" AS "Requestinstance__currentphase", "Requestinstance"."selfsolving" AS "Requestinstance__selfsolving", "User"."username" AS "User__username", "User"."id" AS "User__id", "User"."company_id" AS "User__company_id", "User"."usertype_id" AS "User__usertype_id", "Requesttype"."id" AS "Requesttype__id", "Requesttype"."name" AS "Requesttype__name", "Requesttype"."subtitle" AS "Requesttype__subtitle", "Requesttype"."description" AS "Requesttype__description", "Requesttype"."order" AS "Requesttype__order", "Requesttype"."selfsolving" AS "Requesttype__selfsolving" FROM "core"."requestinstances" AS "Requestinstance" LEFT JOIN "core"."users" AS "User" ON ("Requestinstance"."user_id" = "User"."id") LEFT JOIN "core"."requesttypes" AS "Requesttype" ON ("Requestinstance"."requesttype_id" = "Requesttype"."id") WHERE EXISTS (SELECT "Privilege"."id" FROM "core"."privileges" AS Privilege WHERE "Privilege"."objecttable" = 'requestinstances' AND "Privilege"."objectid" = "Requestinstance"."id" AND "Privilege"."read" = 'TRUE' AND "Privilege"."id" = (8)) OR EXISTS (SELECT "Privilege"."id" FROM "core"."privileges" AS Privilege WHERE "Privilege"."objecttable" = 'requestinstances' AND "Privilege"."read" = 'TRUE' AND "Privilege"."id" IN (7, 13, 6, 9, 10, 12) AND "Privilege"."objectid" IS NULL) LIMIT 1 

Adding the quotes manually to the alias-string while building the statement does not help either, since the framework strips the quotes.

So finally my question(s):

Does anybody know if the containable behavior supports expressions at all? I have already dug into the DboSource, PdoSource and Postgresql datasources but could not find anything wrong there. The Containable behavior looks

Or is there another way I could achieve what I want?

I'd be glad for any help in this matter! Thanks in advance!

"FATAL: no pg_hba.conf entry for host" but I can connect trough pgAdmin

I'm having a little trouble with my dev environment for a CakePHP based project. The thing is, CakePHP can connect in production, but if I run trough XAMPP on my PC, then throws me a FATAL: no pg_hba.conf entry for host "myexternalip", user "dbuser", database "dbname", SSL off

but using the same parameters I can connect from my PC trough pgAdmin.

I've tried some googling tips that I've found like copy the libpq.dll from c:\xampp\php to c:\xampp\apache\bin but, no luck, had checked pnp.ini and extension=php_pdo_pgsql.dll line is uncommented, had tried reboot apache several times but nothing changes

The project is hosted on the Heroku platform and my /app/config/database.php looks like:

public $production = array(
    'datasource' => 'Database/Postgres',
    'persistent' => false,
    'host' => 'anywhere.at.amazonaws.com',
    'login' => 'dbuser',
    'password' => 'dbpwd',
    'database' => 'dbname',
    'encoding' => 'utf8',
);

Here's the stack trace:

- CORE\Cake\Model\Datasource\DboSource.php line 260 → Postgres->connect()
- CORE\Cake\Model\ConnectionManager.php line 105 → DboSource->__construct(array)
- CORE\Cake\Model\Model.php line 3613 → ConnectionManager::getDataSource(string)
- CORE\Cake\Model\Model.php line 1155 → Model->setDataSource(string)
- CORE\Cake\Model\Model.php line 3640 → Model->setSource(string)
- CORE\Cake\Model\Model.php line 827 → Model->getDataSource()

Can anyone give me another hint that could help?

How can I replace a column value with a name in an SQL query?

I am running this query:

SELECT type, COUNT(type) FROM "table" where id = 8 GROUP BY type;

With this result:

6    3814
8     341
5     328

I'd like to have something like this, where I can specify names:

Arbitrary Name    3814
Other name         341
Test Name          328

Instead of the type column listing 6, how can I get it to have a custom name like Test Column 6, or Fun Column 5? I'm using postgres.
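
One way to do this (a sketch; the labels are placeholders) is a CASE expression over the grouped column:

SELECT CASE type
           WHEN 6 THEN 'Arbitrary Name'
           WHEN 8 THEN 'Other name'
           WHEN 5 THEN 'Test Name'
           ELSE type::text
       END AS type_name,
       COUNT(type)
FROM "table"
WHERE id = 8
GROUP BY type;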

Rails 4: Querying a cumulative sum of a boolean column per day

I've been attempting to get a cumulative sum of a boolean column per day. I've tried different approaches using #sum, and looked up #distinct too (by reading other posts on Stack Overflow), but I haven't been able to implement either method in a way that produces the hash I'd like.

Currently, this is the query I have:

Device.where(boxed: true).group('date(updated_at)').count

Which gives me this:

{Fri, 17 Apr 2015=>48, Sat, 18 Apr 2015=>44, Sun, 19 Apr 2015=>5, Mon, 20 Apr 2015=>48}

But I would like it to look like this:

{Fri, 17 Apr 2015=>48, Sat, 18 Apr 2015=>92, Sun, 19 Apr 2015=>97, Mon, 20 Apr 2015=>125}

I am using Rails 4 and postgres. Any help would be appreciated.
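
The same daily counts can be turned into a running total in SQL with a window function; a sketch (the table name devices follows Rails conventions and is an assumption):

SELECT day,
       SUM(daily_count) OVER (ORDER BY day) AS cumulative_count
FROM (
    SELECT date(updated_at) AS day, COUNT(*) AS daily_count
    FROM devices
    WHERE boxed = true
    GROUP BY date(updated_at)
) AS per_day
ORDER BY day;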

PG alternative to grouping?

In development on SQLite this works like a charm:

@human = Human.joins(:human_logins).group("human_logins.human_id").order("count(human_logins.human_id) ASC")

Though, in production on Heroku and using PG, it crashes and I get the following error:

 ActionView::Template::Error (PG::GroupingError: ERROR:  column "humans.id" must appear in the GROUP BY clause or be used in an aggregate function

Guessing it's just a compatibility problem, any insight on how I can alter this so that it works nicely in production would be awesome.
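
PostgreSQL requires every selected column to be aggregated or listed in the GROUP BY; since 9.1, grouping by the table's primary key is enough to select all of its columns, so an equivalent query (a sketch; table names follow Rails conventions) is:

SELECT humans.*
FROM humans
JOIN human_logins ON human_logins.human_id = humans.id
GROUP BY humans.id
ORDER BY COUNT(human_logins.human_id) ASC;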

Migrate oracle sql query to PostgreSql

I have a simple Oracle query:

SELECT SEQ_01.NEXTVAL FROM DUAL

I want to write this query in standard SQL so it can run on both databases.
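
For what it's worth, there is no single sequence syntax that runs unchanged on both engines (Oracle uses SEQ_01.NEXTVAL FROM DUAL, SQL Server/DB2 use NEXT VALUE FOR, and PostgreSQL uses nextval()), so the statement usually has to be generated per database; the PostgreSQL form is:

-- PostgreSQL equivalent of SELECT SEQ_01.NEXTVAL FROM DUAL
SELECT nextval('seq_01');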

Filter rows if array contains value in a given range

I have a PostgreSQL table with a column timeslot which is an int array. I need to make a query that returns rows whose timeslot contains at least one number within a specified range. Alternatively, it could return rows whose timeslot contains values greater than 0. Right now, my query looks like this:

SELECT * FROM shows WHERE timeslot @> ARRAY[167], which will only return items from shows whose timeslot contains 167, when what I really need is something like:

SELECT * FROM shows WHERE timeslot @> 0-167, or

SELECT * FROM shows WHERE timeslot @> >0
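
Two hedged sketches: the range case can use the overlap operator && against a generated array, and the greater-than-zero case can unnest the array:

-- rows whose timeslot has at least one value in 0..167
SELECT *
FROM shows
WHERE timeslot && (SELECT array_agg(n) FROM generate_series(0, 167) AS n);

-- rows whose timeslot has at least one value greater than 0
SELECT *
FROM shows
WHERE EXISTS (SELECT 1 FROM unnest(timeslot) AS t WHERE t > 0);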

Heroku transfer db from one app to another

I need to transfer db from app_1 to app_2

I created backup on app_1

Then ran:

heroku pg:backups restore HEROKU_POSTGRESQL_COLOR --app app_2 heroku pgbackups:url --app app_1

HEROKU_POSTGRESQL_COLOR = database URL for app_2

Then I get:

 !    `pg:backups` is not a heroku command.
 !    Perhaps you meant `pgbackups`.
 !    See `heroku help` for a list of available commands.

So I ran:

heroku pgbackups:restore HEROKU_POSTGRESQL_COLOR --app app_2 heroku pgbackups:url --app app_1

Then I get the following:

!    WARNING: Destructive Action
!    This command will affect the app: app_2
!    To proceed, type "app_2" or re-run this command with --confirm app_2

So I confirmed with:

> app_2
 !    Please add the pgbackups addon first via:
 !    heroku addons:add pgbackups

So then I ran: heroku addons:add pgbackups --app app_2

Adding pgbackups on app_2... failed
 !    Add-on plan not found.

Is there a way around this issue? Any help would be greatly appreciated!

SQL Update Loop Assistance

I am having issues working out an SQL function in Postgresql. I have managed to get this done in python but it takes a very long time on a table with millions of records.

What I have is an 'example_table' that is structured as follows, with data that resembles the sample below:

Example Table
| id | version | valid_from          | valid_to            | time_valid |
| 1  | 1       | 2010-03-21 19:00:00 | 2010-03-21 19:00:00 | NULL       |
| 1  | 2       | 2011-02-02 09:00:00 | 2011-02-02 09:00:00 | NULL       |
| 1  | 3       | 2012-04-20 15:00:00 | 2012-04-20 15:00:00 | NULL       |
| 2  | 1       | 2012-07-02 04:00:00 | 2012-07-02 04:00:00 | NULL       |
| 3  | 1       | 2011-05-05 05:00:00 | 2011-05-05 05:00:00 | NULL       |

As you can see, I have 3 records with id "1", each with a corresponding version (1 to 3 in this case).

I would like to update versions 1 and 2 by setting 'valid_to' equal to the 'valid_from' value of the next later version.

Updated Table
| id | version | valid_from              | valid_to                | time_valid    |
| 1  | 1       | 2010-03-21 19:00:00     | **2011-02-02 09:00:00** | **Some Time** |
| 1  | 2       | **2011-02-02 09:00:00** | **2012-04-20 15:00:00** | **Some Time** |
| 1  | 3       | **2012-04-20 15:00:00** | 2012-04-20 15:00:00     | NULL          |
| 2  | 1       | 2012-07-02 04:00:00     | 2012-07-02 04:00:00     | NULL          |
| 3  | 1       | 2011-05-05 05:00:00     | 2011-05-05 05:00:00     | NULL          |

Some records will have many versions while others may have only one. It would also be convenient to calculate a time_valid field at the same time, which I assume is done by just subtracting valid_from from valid_to. Again, I have millions upon millions of records and multiple tables I need to do this to, so faster is indeed better.

Many thanks for any working code examples!
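
A set-based sketch that avoids per-row loops entirely: compute the next version's valid_from with lead() and update from that derived table (this assumes time_valid is an interval column):

UPDATE example_table AS e
SET valid_to   = n.next_valid_from,
    time_valid = n.next_valid_from - e.valid_from
FROM (
    SELECT id, version,
           lead(valid_from) OVER (PARTITION BY id ORDER BY version) AS next_valid_from
    FROM example_table
) AS n
WHERE e.id = n.id
  AND e.version = n.version
  AND n.next_valid_from IS NOT NULL;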

Decrease of number results between 2 operations

I'm facing a strange SQL problem; I have these 2 queries:

INSERT INTO tmp_copy SELECT champs FROM tmp WHERE champs IN (SELECT champs FROM myTable)

COPY (SELECT * FROM myTable WHERE champs IN (SELECT champs FROM tmp_copy)) TO 'bla/bla.csv' WITH DELIMITER ';';

This operation is as simple as it looks.

The first query puts into the table "tmp_copy" the rows common to tmp and myTable, and the second query writes out to a file the rows of myTable whose champs appear in tmp_copy.

But in the process I lose a lot of results, and I don't know why. To me, the number of results should be the same...

Thanks for the help!

Subquery in JPQL (multiple select)

I have two tables in my database: the Account table and the AccountType table. I need a JPQL query to retrieve all accounts whose kind column (in the AccountType table) equals "BalanceSheet", and I also need another one to retrieve all accounts whose kind column (in the AccountType table) equals "Outturn".

I tried this, but it doesn't work.

Query query = daoFactory
            .getEntityManager()
            .createQuery(
                    "SELECT a FROM Account a WHERE a.deleted=false AND a.type_id IN (SELECT id FROM "
                    + "AccountType WHERE deleted=false AND kind='BalanceSheet')");
List<Account> result = query.getResultList();

postgreSQL - clone or update table between 2 schemas

This is the question:

DB a) a regenerated table, updated once a day (dropdb / createdb / create table); not managed, nor manageable

DB b) should show the same data as a, but is constantly accessed

How can I get the data from a into b?

psql test -c \
"\copy (SELECT i, t FROM original_table ORDER BY i) TO STDOUT" | \
psql test -c "\copy copy_table (i, t) FROM STDIN"

Can this be adapted to take all columns, as "*" would in a standard query?
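
For the follow-up question: \copy also accepts a bare table name, which exports every column, so a sketch of the all-columns variant of the same pipeline is:

psql test -c "\copy original_table TO STDOUT" | \
psql test -c "\copy copy_table FROM STDIN"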

Close Heroku Postgres Connections

I am hosting my app on Heroku and using the Postgres add-on. When I am testing the site, the connections keep adding up even though I'm the only one using it. How do I solve this?

I am using python, flask, and sqlalchemy.

How to import postgres relations into django 1.8

I have an existing Django database in Postgres and it has several relationships. I have a new Django/Postgres model and I want to import some of the tables, which have relationships, from the existing database. Do I need to map old fields to the new model? How do I preserve relationships? Is there a tool that helps with this process?

Thanks

Convert keep dense_rank from Oracle query into postgres

Given the following table:

"ID" NUMBER(11,0) NOT NULL ENABLE,
"DATE_TIME" TIMESTAMP (6) NOT NULL ENABLE,
"GLOBAL_TRANSACTION_ID" VARCHAR2(40 BYTE) NOT NULL ENABLE,
"REPOST_FLAG" CHAR(1 CHAR) DEFAULT 'N' NOT NULL ENABLE,
"ENVIRONMENT" VARCHAR2(20 BYTE),
"TRANSACTION_MODE" VARCHAR2(20 BYTE),
"STATUS" VARCHAR2(20 BYTE) NOT NULL DISABLE, 
"STEP" VARCHAR2(80 BYTE), 
"EVENT" VARCHAR2(20 BYTE) NOT NULL DISABLE, 
"EVENT_CODE" VARCHAR2(20 BYTE), 
"EVENT_SUBCODE" VARCHAR2(20 BYTE), 
"SUMMARY" VARCHAR2(200 BYTE), 
"BUSINESS_IDENTIFIER" VARCHAR2(80 BYTE), 
"ALTERNATE_BUSINESS_IDENTIFIER" VARCHAR2(80 BYTE), 
"DOMAIN" VARCHAR2(20 BYTE) NOT NULL DISABLE, 
"PROCESS" VARCHAR2(80 BYTE) NOT NULL DISABLE, 
"SERVICE_NAME" VARCHAR2(80 BYTE), 
"SERVICE_VERSION" VARCHAR2(20 BYTE), 
"DETAIL" CLOB, 
"APP_NAME" VARCHAR2(80 BYTE), 
"APP_USER" VARCHAR2(20 BYTE), 
"HOST_NAME" VARCHAR2(80 BYTE), 
"THREAD_NAME" VARCHAR2(200 BYTE), 

I'm trying to convert the following Oracle query into Postgres

select
        this_.GLOBAL_TRANSACTION_ID as y0_,
        this_.BUSINESS_IDENTIFIER as y1_,
        this_.ENVIRONMENT as y2_,
        count(*) as y3_,
        this_.HOST_NAME as y4_,
        listagg(process,
        ', ') within
    group (order by
        date_time) as process,
        min(this_.DATE_TIME) as y6_,
        max(this_.DATE_TIME) as y7_,
        max(status)keep(dense_rank last
    order by
        date_time,
        decode(status,
        'COMPLETED',
        'd',
        'FAILED',
        'c',
        'TERMINATED',
        'b',
        'STARTED',
        'a',
        'z')) as status
    from
        ACTIVITY_MONITOR_TRANSACTION this_
    where
        this_.DATE_TIME between ? and ?
        and 1=1
    group by
        this_.GLOBAL_TRANSACTION_ID,
        this_.BUSINESS_IDENTIFIER,
        this_.ENVIRONMENT,
        this_.HOST_NAME,
        global_transaction_id,
        business_identifier,
        global_transaction_id,
        business_identifier
    order by
        y7_ asc

the problem is I don't know how to convert this block:

max(status)keep(dense_rank last
    order by
        date_time,
        decode(status,
        'COMPLETED',
        'd',
        'FAILED',
        'c',
        'TERMINATED',
        'b',
        'STARTED',
        'a',
        'z')) as status

The aim of this block is to get the latest status, and in case of exactly the same time (it is possible!) to pick the status according to the ordering above.

This is an example of the data:

    ID     |      DATE_TIME      |  GLOBAL_TRANSACTION_ID   |  STATUS
==========================================================================
 54938456  | 2015-04-20 09:39:27 | 8d276718-eca7-4fd0-a266  | STARTED
 54938505  | 2015-04-20 09:39:27 | 8d276718-eca7-4fd0-a266  | COMPLETED
 54938507  | 2015-04-20 09:39:27 | 8d276718-eca7-4fd0-a266  | FAILED

The status should be "COMPLETED" so my query should return, among other rows, the following:

 GLOBAL_TRANSACTION_ID    (...)     STATUS
=============================================
 8d276718-eca7-4fd0-a266  (...)     COMPLETED

I have tried splitting the query into 2:

select
    this_.GLOBAL_TRANSACTION_ID as y0_,
    this_.BUSINESS_IDENTIFIER as y1_,
    this_.ENVIRONMENT as y2_,
    count(*) as y3_,
    this_.HOST_NAME as y4_,
    array_to_string(array_agg(distinct process),
    ',') as process,
    min(this_.DATE_TIME) as y6_,
    max(this_.DATE_TIME) as y7_,
    max(this_.STATUS) as y8_
from
    ACTIVITY_MONITOR_TRANSACTION this_
where
    this_.DATE_TIME between ? and ?
group by
    this_.GLOBAL_TRANSACTION_ID,
    this_.BUSINESS_IDENTIFIER,
    this_.ENVIRONMENT,
    this_.HOST_NAME,
    global_transaction_id,
    business_identifier
order by
    y7_ desc limit ?

and then

select
    status
from
    activity_monitor_transaction
where
    GLOBAL_TRANSACTION_ID=?
order by
    date_time DESC,
    CASE status
        WHEN 'COMPLETED'THEN 'd'
        WHEN 'FAILED' THEN 'c'
        WHEN 'TERMINATED' THEN 'b'
        WHEN 'STARTED' THEN 'a'
        ELSE 'z'
    END DESC LIMIT 1

But this is causing me performance issues as I have to execute the second query once per row.

Is there any way to mimic the keep dense_rank block in Postgres so the whole thing can be done in just one query?
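
Here is an untested sketch of what a single-query equivalent might look like, assuming PostgreSQL 9.0+ so that ordered aggregates are available (only the relevant columns are shown). The idea is that array_agg with an ORDER BY that puts the "best" status first, followed by taking element [1], plays the role of keep (dense_rank last):

select
    global_transaction_id,
    business_identifier,
    environment,
    host_name,
    count(*) as cnt,
    string_agg(process, ', ' order by date_time) as process,
    min(date_time) as first_time,
    max(date_time) as last_time,
    -- sort the rows so the latest (and, on ties, "best") status comes first,
    -- then take the first element of the aggregated array
    (array_agg(status order by date_time desc,
               case status
                   when 'COMPLETED'  then 'd'
                   when 'FAILED'     then 'c'
                   when 'TERMINATED' then 'b'
                   when 'STARTED'    then 'a'
                   else 'z'
               end desc))[1] as status
from activity_monitor_transaction
where date_time between ? and ?
group by global_transaction_id, business_identifier, environment, host_name
order by last_time;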

PostgreSQL/JooQ bulk insertion performance issues when loading from CSV; how do I improve the process?

For this project, I intend to make a web version, and I am currently working on a PostgreSQL (9.x) backend that the webapp will query.

Right now, the tracer generates a zip file with two CSVs in it and loads it into an H2 database at runtime, whose schema is this (and yes, I'm aware that the SQL could be written a little better):

create table matchers (
    id integer not null,
    class_name varchar(255) not null,
    matcher_type varchar(30) not null,
    name varchar(1024) not null
);

alter table matchers add primary key(id);

create table nodes (
    id integer not null,
    parent_id integer not null,
    level integer not null,
    success integer not null,
    matcher_id integer not null,
    start_index integer not null,
    end_index integer not null,
    time bigint not null
);

alter table nodes add primary key(id);
alter table nodes add foreign key (matcher_id) references matchers(id);
create index nodes_parent_id on nodes(parent_id);
create index nodes_indices on nodes(start_index, end_index);

Now, since the PostgreSQL database will be able to handle more than one trace, I had to add a further table; the schema on the PostgreSQL backend looks like this (again, the SQL could be better; also, in the parse_info table, the content column contains the full text of the parsed file, which is stored separately in the zip):

create table parse_info (
    id uuid primary key,
    date timestamp not null,
    content text not null
);

create table matchers (
    parse_info_id uuid references parse_info(id),
    id integer not null,
    class_name varchar(255) not null,
    matcher_type varchar(30) not null,
    name varchar(1024) not null,
    unique (parse_info_id, id)
);

create table nodes (
    parse_info_id uuid references parse_info(id),
    id integer not null,
    parent_id integer not null,
    level integer not null,
    success integer not null,
    matcher_id integer not null,
    start_index integer not null,
    end_index integer not null,
    time bigint not null,
    unique (parse_info_id, id)
);

alter table nodes add foreign key (parse_info_id, matcher_id)
    references matchers(parse_info_id, id);
create index nodes_parent_id on nodes(parent_id);
create index nodes_indices on nodes(start_index, end_index);

Now, what I am currently doing is taking existing zip files and inserting them into a PostgreSQL database, using JooQ and its CSV loading API.

The process is a little complicated... Here are the current steps:

  • a UUID is generated;
  • I read the necessary info from the zip (parse date, input text) and write the record in the parse_info table;
  • I create temporary copies of the CSVs so that the JooQ loading API can use them (see after the code extract for why);
  • I insert all matchers, then all nodes.

Here is the code:

public final class Zip2Db2
{
    private static final Pattern SEMICOLON = Pattern.compile(";");
    private static final Function<String, String> CSV_ESCAPE
        = TraceCsvEscaper.ESCAPER::apply;

    // Paths in the zip to the different components
    private static final String INFO_PATH = "/info.csv";
    private static final String INPUT_PATH = "/input.txt";
    private static final String MATCHERS_PATH = "/matchers.csv";
    private static final String NODES_PATH = "/nodes.csv";

    // Fields to use for matchers zip insertion
    private static final List<Field<?>> MATCHERS_FIELDS = Arrays.asList(
        MATCHERS.PARSE_INFO_ID, MATCHERS.ID, MATCHERS.CLASS_NAME,
        MATCHERS.MATCHER_TYPE, MATCHERS.NAME
    );

    // Fields to use for nodes zip insertion
    private static final List<Field<?>> NODES_FIELDS = Arrays.asList(
        NODES.PARSE_INFO_ID, NODES.PARENT_ID, NODES.ID, NODES.LEVEL,
        NODES.SUCCESS, NODES.MATCHER_ID, NODES.START_INDEX, NODES.END_INDEX,
        NODES.TIME
    );

    private final FileSystem fs;
    private final DSLContext jooq;
    private final UUID uuid;

    private final Path tmpdir;

    public Zip2Db2(final FileSystem fs, final DSLContext jooq, final UUID uuid)
        throws IOException
    {
        this.fs = fs;
        this.jooq = jooq;
        this.uuid = uuid;

        tmpdir = Files.createTempDirectory("zip2db");
    }

    public void removeTmpdir()
        throws IOException
    {
        // From java7-fs-more (http://ift.tt/1GcqTYT)
        MoreFiles.deleteRecursive(tmpdir, RecursionMode.KEEP_GOING);
    }

    public void run()
    {
        time(this::generateMatchersCsv, "Generate matchers CSV");
        time(this::generateNodesCsv, "Generate nodes CSV");
        time(this::writeInfo, "Write info record");
        time(this::writeMatchers, "Write matchers");
        time(this::writeNodes, "Write nodes");
    }

    private void generateMatchersCsv()
        throws IOException
    {
        final Path src = fs.getPath(MATCHERS_PATH);
        final Path dst = tmpdir.resolve("matchers.csv");

        try (
            final Stream<String> lines = Files.lines(src);
            final BufferedWriter writer = Files.newBufferedWriter(dst,
                StandardOpenOption.CREATE_NEW);
        ) {
            // Throwing below is from throwing-lambdas
            // (http://ift.tt/1wjtgOE)
            lines.map(this::toMatchersLine)
                .forEach(Throwing.consumer(writer::write));
        }
    }

    private String toMatchersLine(final String input)
    {
        final List<String> parts = new ArrayList<>();
        parts.add('"' + uuid.toString() + '"');
        Arrays.stream(SEMICOLON.split(input, 4))
            .map(s -> '"' + CSV_ESCAPE.apply(s) + '"')
            .forEach(parts::add);
        return String.join(";", parts) + '\n';
    }

    private void generateNodesCsv()
        throws IOException
    {
        final Path src = fs.getPath(NODES_PATH);
        final Path dst = tmpdir.resolve("nodes.csv");

        try (
            final Stream<String> lines = Files.lines(src);
            final BufferedWriter writer = Files.newBufferedWriter(dst,
                StandardOpenOption.CREATE_NEW);
        ) {
            lines.map(this::toNodesLine)
                .forEach(Throwing.consumer(writer::write));
        }
    }

    private String toNodesLine(final String input)
    {
        final List<String> parts = new ArrayList<>();
        parts.add('"' + uuid.toString() + '"');
        SEMICOLON.splitAsStream(input)
            .map(s -> '"' + CSV_ESCAPE.apply(s) + '"')
            .forEach(parts::add);
        return String.join(";", parts) + '\n';
    }

    private void writeInfo()
        throws IOException
    {
        final Path path = fs.getPath(INFO_PATH);

        try (
            final BufferedReader reader = Files.newBufferedReader(path);
        ) {
            final String[] elements = SEMICOLON.split(reader.readLine());

            final long epoch = Long.parseLong(elements[0]);
            final Instant instant = Instant.ofEpochMilli(epoch);
            final ZoneId zone = ZoneId.systemDefault();
            final LocalDateTime time = LocalDateTime.ofInstant(instant, zone);

            final ParseInfoRecord record = jooq.newRecord(PARSE_INFO);

            record.setId(uuid);
            record.setContent(loadText());
            record.setDate(Timestamp.valueOf(time));

            record.insert();
        }
    }

    private String loadText()
        throws IOException
    {
        final Path path = fs.getPath(INPUT_PATH);

        try (
            final BufferedReader reader = Files.newBufferedReader(path);
        ) {
            return CharStreams.toString(reader);
        }
    }

    private void writeMatchers()
        throws IOException
    {
        final Path path = tmpdir.resolve("matchers.csv");

        try (
            final BufferedReader reader = Files.newBufferedReader(path);
        ) {
            jooq.loadInto(MATCHERS)
                .onErrorAbort()
                .loadCSV(reader)
                .fields(MATCHERS_FIELDS)
                .separator(';')
                .execute();
        }
    }

    private void writeNodes()
        throws IOException
    {
        final Path path = tmpdir.resolve("nodes.csv");

        try (
            final BufferedReader reader = Files.newBufferedReader(path);
        ) {
            jooq.loadInto(NODES)
                .onErrorAbort()
                .loadCSV(reader)
                .fields(NODES_FIELDS)
                .separator(';')
                .execute();
        }
    }

    private void time(final ThrowingRunnable runnable, final String description)
    {
        System.out.println(description + ": start");
        final Stopwatch stopwatch = Stopwatch.createStarted();
        runnable.run();
        System.out.println(description + ": done (" + stopwatch.stop() + ')');
    }

    public static void main(final String... args)
        throws IOException
    {
        if (args.length != 1) {
            System.err.println("missing zip argument");
            System.exit(2);
        }

        final Path zip = Paths.get(args[0]).toRealPath();

        final UUID uuid = UUID.randomUUID();
        final DSLContext jooq = PostgresqlTraceDbFactory.defaultFactory()
            .getJooq();

        try (
            final FileSystem fs = MoreFileSystems.openZip(zip, true);
        ) {
            final Zip2Db2 zip2Db = new Zip2Db2(fs, jooq, uuid);
            try {
                zip2Db.run();
            } finally {
                zip2Db.removeTmpdir();
            }
        }
    }
}

Now, here is my first problem... It is much slower than loading into H2. Here is a timing for a CSV containing 620 matchers and 45746 nodes:

Generate matchers CSV: start
Generate matchers CSV: done (45.26 ms)
Generate nodes CSV: start
Generate nodes CSV: done (573.2 ms)
Write info record: start
Write info record: done (311.1 ms)
Write matchers: start
Write matchers: done (4.192 s)
Write nodes: start
Write nodes: done (22.64 s)

Give or take, and forgetting the part about writing specialized CSVs (see below), that is 25 seconds. Loading this into an on-the-fly, disk-based H2 database takes less than 5 seconds!

The other problem I have is that I have to write dedicated CSVs; it appears that the CSV loading API is not really flexible in what it accepts, and I have, for instance, to turn this line:

328;SequenceMatcher;COMPOSITE;token

into this:

"some-randome-uuid-here";"328";"SequenceMatcher";"COMPOSITE";"token"

But my biggest problem is in fact that this zip is pretty small. For instance, I have a zip with not 620 but 1532 matchers, and not 45746 but more than 34 million nodes; even if we dismiss the CSV generation time (the original nodes CSV is 1.2 GiB), since the H2 injection takes 20 minutes, multiplying this by 5 puts the PostgreSQL load somewhere north of an hour and a half, which is a lot!

All in all, the process is quite inefficient at the moment...


Now, in the defence of PostgreSQL:

  • constraints on the PostgreSQL instance are much higher than those on the H2 instance: in particular, the generated zip files don't need a UUID at all;
  • H2 is tuned "insecurely" for writes: jdbc:h2:/path/to/db;LOG=0;LOCK_MODE=0;UNDO_LOG=0;CACHE_SIZE=131072.

Still, this difference in insertion times seems a little excessive, and I am quite sure that it can be better. But I don't know where to start.

Also, I am aware that PostgreSQL has a dedicated mechanism to load from CSVs, but here the CSVs are in a zip file to start with, and I'd really like to avoid having to create a dedicated CSV as I am currently doing... Ideally I'd like to read line by line from the zip directly (which is what I do for H2 injection), transform the line and write into the PostgreSQL schema.
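
For reference, that dedicated mechanism is COPY; a minimal sketch of what loading the transformed, semicolon-delimited nodes CSV with it might look like (the column list mirrors NODES_FIELDS above, the file path is a placeholder, and COPY FROM a file runs server-side, whereas psql's \copy is the client-side variant):

-- Bulk-load the pre-generated CSV; quoted fields use the default '"' quote
COPY nodes (parse_info_id, parent_id, id, level, success,
            matcher_id, start_index, end_index, time)
FROM '/tmp/zip2db/nodes.csv'
WITH (FORMAT csv, DELIMITER ';');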

Finally, I am also aware that I currently do not disable constraints on the PostgreSQL schema before insertion; I have yet to try this (will it make a difference?).

So, what do you suggest I do to improve the performance?

Can't rename postgres database that has mixed case name

The database's name is Parkes and I want to rename it to Parkes1.

I have tried

myconn.Execute "ALTER DATABASE ""Parkes"" RENAME TO ""Parkes1"";"

and pretty much every variation I can think of, but they all return the message

error near '"Parkes"'

meanwhile if I try

myconn.Execute "ALTER DATABASE Parkes RENAME TO Parkes1;"

the message says

unable to find database parkes

I can't see what I am doing wrong. I am on PostgreSQL 9.4, using ADO via the PostgreSQL ODBC library from VB6 to do the rename. I can rename the database fine using pgAdmin III, but I need to be able to do it programmatically.
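
For reference, this is the statement the server ultimately needs to receive; a sketch to verify directly in psql or pgAdmin, independently of the VB6 string escaping:

-- Mixed-case identifiers must be double-quoted to preserve their case
ALTER DATABASE "Parkes" RENAME TO "Parkes1";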

Django DateTimeField User Input

So I'm trying to populate a model in Django using a Postgres (PostGIS) database. The problem I'm having is with inputting the DateTimeField. I have written a population script, but every time I run it I get the error django.db.utils.IntegrityError: null value in column "pub_date" violates not-null constraint. The code below shows my model and the part of the population script that applies to the table.

The model:

class Article(models.Model):
    authors = models.ManyToManyField(Author)
    location = models.ForeignKey(Location)
    article_title = models.CharField(max_length=200, unique_for_date="pub_date")
    pub_date = models.DateTimeField('date published')
    article_keywords = ArrayField(ArrayField(models.CharField(max_length=20, blank=True), size=8), size=8,)
    title_id = models.CharField(max_length=200)
    section_id = models.CharField(max_length=200)

And the population script:

def populate():
    add_article(
        id = "1",
        article_title = "Obama scrambles to get sceptics in Congress to support Iran nuclear deal",
        pub_date = "2015-04-06T20:38:59Z",
        article_keywords = "{obama, iran, debate, congress, america, un, republican, democrat, nuclear, isreal}",
        title_id = "white-house-scrambles-sceptics-congress-iran-nuclear-deal",
        section_id = "us-news",
        location_id = "1"
        )

def add_article(id, article_title, pub_date, article_keywords, title_id, section_id, location_id):
    article = Article.objects.get_or_create(article_title=article_title)[0]
    article.id
    article.article_title
    article.pub_date
    article.article_keywords
    article.title_id
    article.section_id
    article.location_id
    article.save()
    return article

if __name__ == '__main__':
    print "Starting Newsmap population script..."

    populate()

I've searched around for ages but there seems to be no solution to this specific problem. Any help much appreciated!!

Creating a Null relational date type in PostgreSQL

I am converting a MySQL application that contains relational comparisons of the form:

mydate < '2000-01-01'::date

MySQL will return true in this instance if mydate is NULL, and PostgreSQL will not. There are thousands of lines like this, too many and too varied to modify. I am investigating using a custom type to emulate this behavior, and I would like to use the built-in I/O functions for date as a base instead of rolling my own. Am I on the right track, and where do I find the documentation for those functions?

Multi tables relations in SELECT request

How can I do something like this:

SELECT *
FROM table1, table2, table3
WHERE table1.champs = table2.champs = table3.champs

And without any duplicate "champs", because at least one of the three tables may have duplicate rows.
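
A sketch of the explicit form this has to take (selecting just the shared column here for brevity): chained equality has to be written as two AND-ed comparisons, and DISTINCT removes the duplicates coming from repeated champs values:

SELECT DISTINCT t1.champs
FROM table1 t1
JOIN table2 t2 ON t2.champs = t1.champs
JOIN table3 t3 ON t3.champs = t1.champs;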

Rails order by count on association

I have 2 models. Gif, GifStatistic

gif has_many :gif_statistics

GifStatistics has column called state. It can be either like or dislike

What I want to achieve is to query gifs, but order them by the highest count of likes.

Something like(Pseudo code)

Gif.joins(:gif_statistics).order(count(gif.gif_statistics.state))

How do I achieve this?
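
For reference, the SQL such an ordering boils down to might look like this (a sketch; the names gifs, gif_statistics and gif_id are assumed from Rails conventions, and selecting gifs.* while grouping by the primary key relies on PostgreSQL 9.1+):

SELECT gifs.*, count(gif_statistics.id) AS likes_count
FROM gifs
JOIN gif_statistics ON gif_statistics.gif_id = gifs.id
WHERE gif_statistics.state = 'like'
GROUP BY gifs.id
ORDER BY likes_count DESC;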

Sql injection attacks in geoserver

I'm using GeoServer's functionality for dynamic PostgreSQL queries. I have a query with a fully dynamic WHERE clause; for example, you don't even know in advance the exact number of geometric features, or the relationships between them, that have to be selected from the database.

What's the best way to ensure an SQL injection attack cannot be made?

Currently, there's a query of the form Select * from foo f,foo f2 %where%

and the where parameter uses many PostGIS functions and a combination of f, f2 and other geometric entries.

Postgres requires providing the primary key explicitly even though the type is serial

I have a table in postgres defined like this:

CREATE TABLE t_sample (
    sample_pk SERIAL PRIMARY KEY, 
    lis_pkref bigint NOT NULL,
    patient_pkref bigint NOT NULL,  
    sample_lis_id  varchar(50) NOT NULL
);

If I try to execute an insert query:

INSERT INTO t_sample VALUES(0, 0, 'S123' )

it fails with error:

LINE 5: INSERT INTO t_sample VALUES(0, 0, 'S123' )
                                          ^
********** Error **********

ERROR: not valid syntax for integer: „S123“ (I've translated it)
SQL Status:22P02

This query works:

INSERT INTO t_sample VALUES(0, 0, 0, 'S123' )

Doesn't serial create the value for the column automatically like auto_increment in MySQL? How can I achieve this?
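
For comparison, here is a sketch of the two usual forms that let the serial default fire, assuming the t_sample table above:

-- List the columns explicitly and leave the serial column out:
INSERT INTO t_sample (lis_pkref, patient_pkref, sample_lis_id)
VALUES (0, 0, 'S123');

-- Or keep positional values and pass DEFAULT for the serial column:
INSERT INTO t_sample VALUES (DEFAULT, 0, 0, 'S123');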

Comparing values yields different results

I have a script which reads data from a CSV into a pandas DataFrame. It then iterates over each row and passes the row as a pandas Series to another module. There, one of the columns is evaluated to see if it is bigger than a value contained in another pandas Series, e.g.:

df_1:

col_A, col_B, col_C
234.0, 563.2, 565.5
565.7, 324.3, 5676.4

df_2:

col_X, col_Y, col_Z
124.1, 763.5, 562.1

In the above example, the first row of the dataframe is selected and sent to a function which checks whether df_1['col_A'] (i.e. 234.0) is bigger than df_2['col_X'] (i.e. 124.1). This all works perfectly.

My problem comes now that I have changed the script to read the original dataframe from a PostgreSQL database instead of a CSV file. Everything else has remained the same. The comparison appears to do nothing... it doesn't evaluate to True or False, it just skips the evaluation completely.

The original code to compare the two values (each contained in a pandas Series), which worked correctly when reading in from CSV, is:

if df_1['col_A'] > df_2['col_X']:
    #do something

I have checked the types of the two values both when reading in from CSV and from PostgreSQL. It is comparing:

<class 'float'> and <class 'numpy.float64'>

The values stored in the database are of type numeric(10,2).

I have tried the following to no avail:

if df_1.loc['col_A'] > df_2.loc['col_X']
and
if Decimal(df_1.loc['col_A']) > Decimal(df_2.loc['col_X'])
and
if abs(df_1.loc['col_A']) > abs(df_2.loc['col_X'])

I'm completely stumped, since the only thing that has changed is getting the data from a database instead of a CSV. The resulting data types are still the same, i.e. a float compared against a numpy.float64.

Need Help in Configuring Report Scheduler JasperServer

I have JasperServer Community Edition 5.5.0a installed on my Linux server, and it is working fine. I now want to set up the report scheduler so that reports arrive in my mailbox daily, weekly or monthly. I followed the link here, so in the file js.quartz.properties I have this:

quartz.delegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate

quartz.tablePrefix=QRTZ_

quartz.extraSettings=

report.scheduler.mail.sender.host=mail.mydomain.com
report.scheduler.mail.sender.username=myname@mydomain.com
report.scheduler.mail.sender.password=mypassword
report.scheduler.mail.sender.from=myname@mydomain.com
report.scheduler.mail.sender.protocol=smtp
report.scheduler.mail.sender.port=xxx

report.scheduler.web.deployment.uri=http://xxxxxxxxxx:8080/jasperserver

So I need to restart JasperServer. Below are my findings when I run the commands through the terminal:

  • ./ctlscript.sh stop and got warning postgresql could not be stopped
  • kill -9 postgresql so the postgresql is killed
  • ./ctlscript.sh stop and got warning postgresql not running
  • ./ctlscript.sh start and got warning postgresql could not be started

I am confused about several things. Does JasperServer require PostgreSQL? I don't use PostgreSQL, and so far I can still fetch my report through PHP code. But when I schedule one of the built-in JasperServer example reports, I still don't receive the report in my mailbox. Can somebody help me configure this?

How can I optimize this group-wise max SQL query?

This is what I want. It basically grabs all records from the daily_statistics table and groups by user_id. At the same time, it does the following:

  1. The values of the user are grouped by most recent
  2. attachment_ids are represented as an array, so I can determine how many attachments the user has

The result:

 user_id | country_id |       time_at       | assumed_gender |    attachment_ids
---------+------------+---------------------+----------------+----------------------
   21581 |        172 | 2015-04-18 17:55:00 |                | [5942]
   21610 |        140 | 2015-04-18 19:55:00 | male           | [5940]
   22044 |        174 | 2015-04-18 21:55:00 | female         | [12312313, 12312313]

   21353 |        174 | 2015-04-18 20:59:00 | male           | [5938]
   21573 |        246 | 2015-04-18 21:57:00 | male           | [5936]
(5 rows)

The following query executes slowly, taking something like 17 seconds.

  SELECT
    ds.user_id,
    max(case when id=maxid then country_id end) AS country_id,
    max(case when id=maxid then time_at end) AS time_at,
    max(case when id=maxid then properties->'assumed_gender' end) AS assumed_gender,
    json_agg(to_json(attachment_id)) AS attachment_ids
  FROM daily_statistics ds JOIN (
      SELECT u.id as user_id, (
        SELECT ds2.id FROM daily_statistics ds2 WHERE ds2.user_id=u.id AND ds2.metric = 'participation' AND ds2.status = 'active' AND ds2.campaign_id = 39
        ORDER BY ds2.id DESC LIMIT 1
      ) AS maxid FROM users u
      WHERE u.properties -> 'provider' IN ('twitter')
  ) mu ON (ds.user_id=mu.user_id)
  WHERE ds.campaign_id = 39 AND ds.metric = 'participation' AND ds.status = 'active'
  GROUP BY ds.user_id;

The issue is with the group-wise max statement. Is there a way to optimize this query and get the same output? I was thinking of using some kind of lateral join, but then I wouldn't be able to get the number of attachment_ids per user.

Thoughts would be appreciated.
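
One direction worth sketching (untested, assuming the same tables and filters as above, with the IN ('twitter') check written as an equality): pick the most recent row per user with DISTINCT ON, aggregate the attachments separately, and join the two, e.g.:

SELECT latest.user_id,
       latest.country_id,
       latest.time_at,
       latest.properties->'assumed_gender' AS assumed_gender,
       agg.attachment_ids
FROM (
    -- most recent daily_statistics row per user
    SELECT DISTINCT ON (user_id) *
    FROM daily_statistics
    WHERE campaign_id = 39 AND metric = 'participation' AND status = 'active'
    ORDER BY user_id, id DESC
) latest
JOIN (
    -- all attachments per user
    SELECT user_id, json_agg(attachment_id) AS attachment_ids
    FROM daily_statistics
    WHERE campaign_id = 39 AND metric = 'participation' AND status = 'active'
    GROUP BY user_id
) agg USING (user_id)
JOIN users u ON u.id = latest.user_id
WHERE u.properties->'provider' = 'twitter';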

Sqoop + Postgresql: how to prevent quotes around table name

I am trying to import a table from Postgresql to a Parquet file on HDFS.

Here is what I do:

sqoop import \
    --connect "jdbc:postgresql://pg.foo.net:5432/bar" \
    --username user_me --password $PASSWORD \
    --table foo.bar \
    --target-dir /user/me/bar \
    --as-parquetfile

and I get

INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "foo.bar" AS t LIMIT 1
ERROR manager.SqlManager: Error executing statement: org.postgresql.util.PSQLException: ERROR: relation "foo.bar" does not exist

SELECT t.* FROM "foo.bar" AS t LIMIT 1 indeed does not work, but SELECT t.* FROM foo.bar AS t LIMIT 1 does. So the problem is that the schema-qualified table name is quoted as a single identifier. I tried supplying the --table argument in different ways, but with no effect.

How do I work around this?

Postgres keeps transaction idle after killing the program that created it

My Postgres 9.2 seems to keep some transactions in idle status, even after killing the program that created them.

Example: I open a transaction in a program, then I debug the program from Visual Studio, then I stop the debugging and the program is killed.

But pgAdmin still shows the transaction as idle:

(screenshot: pgAdmin server status showing the session as idle)

No VSHost or similar process is active; the transaction remains idle even after closing Visual Studio or shutting down the PC.
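
For reference, a sketch of how such sessions can be inspected and terminated by hand (the column names are those of PostgreSQL 9.2; the pid below is a placeholder):

-- List sessions sitting inside an open transaction
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction';

-- Terminate one of them by pid
SELECT pg_terminate_backend(12345);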

Rails where clause with array arguments

I have a model with some joins, which ends up with

array = ['table1.field = 1', 'table2.field = 2', ...]
mycollection.where(array)

But this ends up with only the first element of the array in the WHERE clause. Is there any way to pass all elements of the array so that Rails joins them with an AND condition?

Migrating data from PostgreSQL to Oracle along with the constraints

I have successfully migrated data (test schema) from PostgreSQL to Oracle using the PostgreSQL drivers.

All the tables are migrated, but the constraints are not reflected in Oracle. My question is how to migrate the constraints along with the tables. I am using Windows 7.

Thanks in advance.

Get libpq runtime version (from libpqxx)

In the libpqxx reference I sometimes read about features that are limited by the underlying libpq version ("Requires libpq version from PostgreSQL 7.4 or better."), like here.

Now the question(s): (1) How do I obtain the libpq version used by the current program (using libpqxx)? (2) As far as I have found out, libpqxx does not necessarily need to be recompiled when moving to a new libpq release; replacing the old libpq.dll often brings in new features, such as SSL compression in my case. That is why I think the libpq version should be obtained at runtime.

How to do indexing of two join tables in Postgres?

I have two very big tables, for example tab1 and tab2, in Postgres. I need to perform a join and a group by on these tables, and to make my query fast I need indexing. Is there any way to use an index for join and group by queries? (I have heard that indexing is not possible for join and group by.)
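
A sketch of the usual starting point (the column names join_key and group_col are placeholders for whatever the query actually joins and groups on):

-- Index the join keys on both sides, and the grouping column if it differs
CREATE INDEX tab1_join_key_idx ON tab1 (join_key);
CREATE INDEX tab2_join_key_idx ON tab2 (join_key);

-- Then check whether the planner actually uses them
EXPLAIN ANALYZE
SELECT t1.group_col, count(*)
FROM tab1 t1
JOIN tab2 t2 ON t2.join_key = t1.join_key
GROUP BY t1.group_col;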

Geoserver sql view parameters messing up postgis function calls

I have set up an OpenLayers HTML window that communicates with the GeoServer platform. GeoServer is connected to the database through a parametric view where the entire WHERE clause is a parameter.

When I run an SQL query of the form "Select b.the_geom from "Beta" b where b.point_id < 100" directly on my database, it works.

If I send a view parameter to geoserver:

  • Select b.the_geom from "Beta" b %parameter%

and the parameter is the WHERE clause of the SQL query I demonstrated at the beginning, everything works perfectly.

So I know that my program can send the parameter (the complete "where" clause) and figure the query out. For example, it shows 3 out of 4 points when the id is < 40 and all 4 if the id is < 100.

However, when I use a PostGIS function to compare geometries, the SQL query still works perfectly inside pgAdmin and returns the same kind of data as the successful query in the first section; for example, it returns all the geometries with the same the_geom values.

However, when I actually send a where clause that includes a PostGIS function such as ST_CROSSES(b.the_geom,c.the_geom) from the HTML page to GeoServer, nothing shows up in the OpenLayers window.

How could this be?

Inserting geometry (JSON) into loopbackjs model

I am adding and editing geometry stored in PostGIS from LoopBack with the Angular SDK, and I have the following working fine:

ProjectGeoms.upsert({"id":"129","projectGeom":   {"type":"Polygon","coordinates":[[arrays of x/y co-ordinates]],"crs":{"type":"name","properties":{"name":"EPSG:4326"}}}}, successCallback, errorCallback)

The SQL query being run is:

loopback:connector:postgresql SQL: UPDATE "public"."project_geom" SET  "ref"=$1,"project_geom"=$2
Parameters: 129,{"type":"Polygon","coordinates":[[arrays of x/y co-ordinates]],"crs":{"type":"name","properties":{"name":"EPSG:4326"}}} +2ms

This is for a geometry that already exists. However, when I add it for a geometry which doesn't exist (the same JSON structure, but with an id 124), then it does not work.

The SQL query being run is:

loopback:connector:postgresql SQL: INSERT INTO "public"."project_geom" ("ref","project_geom") SELECT $1,$2 RETURNING 
Parameters: 124,{"type":"Polygon","coordinates":[[arrays of x/y co-ordinates]],"crs":{"type":"name","properties":{"name":"EPSG:4326"}}} +2ms

The full error message is below:

loopback:connector:postgresql error: syntax error at end of input
at Connection.parseE (C:\Bitnami\wappstack-5.4.30-   0\apache2\htdocs\project\api\node_modules\loopback-connector-postgresql\node_modules\pg\lib\connection.js:534:11)
at Connection.parseMessage (C:\Bitnami\wappstack-5.4.30-0\apache2\htdocs\project\api\node_modules\loopback-connector-postgresql\node_modules\pg\lib\connection.js:361:17)
at Socket.<anonymous> (C:\Bitnami\wappstack-5.4.30-0\apache2\htdocs\project\api\node_modules\loopback-connector-postgresql\node_modules\pg\lib\connection.js:105:22)
at Socket.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:765:14)
at Socket.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:427:10)
at emitReadable (_stream_readable.js:423:5)
at readableAddChunk (_stream_readable.js:166:9)
at Socket.Readable.push (_stream_readable.js:128:10) +15ms

If I run the following query in Postgres directly:

INSERT INTO project_geom (ref,project_geom) VALUES (124,'{"type":"Polygon","coordinates":[[arrays of x/y co-ordinates]],"crs":{"type":"name","properties":{"name":"EPSG:4326"}}}'::json);

I am able to insert the value without issue.

The view schema is (ref::integer, project_geom::json)

Insert string array into postgres text array field

I'm trying to make an article taggable.

Article table:

type Article struct {
  ID int64
  Body string
  Tags string
}

Preparing values:

tags := r.FormValue("tags")
tagArray := fmt.Sprintf("%q", strings.Split(tags, ", ")) // How do I make use of this?

t := Article{
    Body: "this is a post",
    Tags: `{"apple", "orange"}`,    // I have to hard code this for this to work.
}
if err := t.Insert(Db); err != nil {
   // Error handling
}

Database query:

func (t *Article) Insert(db *sqlx.DB) error {
    nstmt, err := db.PrepareNamed(`INSERT INTO articles
    (body, tags)
    VALUES (:body, :tags)
    RETURNING *;
    `)
    if err != nil {
        return err
    }
    defer nstmt.Close()

    err = nstmt.QueryRow(t).StructScan(t)
    if err, ok := err.(*pq.Error); ok {
        return err
    }
    return err
}

Postgres setup for tags field:

tags character varying(255)[] DEFAULT '{}',

It seems like I have to hard code the value for tags for it to work. Otherwise I would get errors like these:

pq: missing dimension value
OR
pq: array value must start with "{" or dimension information

How do I make use of tagArray?

Helpful reference: http://ift.tt/1yK423Y
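
For reference, a sketch of the literal format PostgreSQL itself expects for a character varying[] column; this is the string (or a driver-side array encoding of it) that the :tags parameter ultimately has to carry:

-- The array literal is a single string: curly braces, comma-separated,
-- with double quotes around elements that need them
INSERT INTO articles (body, tags)
VALUES ('this is a post', '{"apple","orange"}');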

Current role name in prompt

I added the %n parameter to my PROMPT1 and PROMPT2 to show the current user. I thought it behaved like SELECT CURRENT_USER;, but when I changed my role with SET ROLE role_name; I noticed that the user in my prompt does not change, while CURRENT_USER shows the role name as I expected.

Is there any way to display the current role in the prompt instead of the current user?

Employee database design with Normalization

I need to design a database whose tables contain the fields below, respecting each normalization rule.

Employee ID,
First Name,
Middle Name,
Last Name,
Date of Birth,
Address Type,
Address Line 1,
Address Line 2,
Address Line 3,
City,
State,
Pin Code,
Type of Identification,
Identification Number,
Issued Authority,
Date of Issue,
Date of Expiry,
Place of Issue,

An employee can have any number of addresses and any number of identification records. Please help me.
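
A minimal sketch of one possible normalized layout, with the repeating address and identification groups split into child tables (table and column names are illustrative only, not a final design):

-- One employee, many addresses, many identification documents
CREATE TABLE employee (
    employee_id   serial PRIMARY KEY,
    first_name    varchar(100) NOT NULL,
    middle_name   varchar(100),
    last_name     varchar(100) NOT NULL,
    date_of_birth date NOT NULL
);

CREATE TABLE employee_address (
    address_id     serial PRIMARY KEY,
    employee_id    integer NOT NULL REFERENCES employee(employee_id),
    address_type   varchar(20) NOT NULL,
    address_line_1 varchar(200) NOT NULL,
    address_line_2 varchar(200),
    address_line_3 varchar(200),
    city           varchar(100) NOT NULL,
    state          varchar(100) NOT NULL,
    pin_code       varchar(10)  NOT NULL
);

CREATE TABLE employee_identification (
    identification_id     serial PRIMARY KEY,
    employee_id           integer NOT NULL REFERENCES employee(employee_id),
    identification_type   varchar(50) NOT NULL,
    identification_number varchar(50) NOT NULL,
    issued_authority      varchar(100),
    date_of_issue         date,
    date_of_expiry        date,
    place_of_issue        varchar(100)
);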

Connect With PostgreSQL in Phalcon Framework

Phalcon is not able to connect to PostgreSQL. Here are my settings in config.php:

return new \Phalcon\Config(array(
    'database' => array(
        'adapter'     => 'Postgresql',
        'host'        => 'localhost',
        'username'    => 'postgres',
        'password'    => 'root',
        'dbname'      => 'mydb',
        'charset'     => 'utf8',
    ),
    'application' => array(
        'controllersDir' => __DIR__ . '/../../app/controllers/',
        'modelsDir'      => __DIR__ . '/../../app/models/',
        'viewsDir'       => __DIR__ . '/../../app/views/',
        'pluginsDir'     => __DIR__ . '/../../app/plugins/',
        'libraryDir'     => __DIR__ . '/../../app/library/',
        'cacheDir'       => __DIR__ . '/../../app/cache/',
        'baseUri'        => '/test/',
    )
));

The page is blank, showing no errors.

DI service implementation

use Phalcon\Db\Adapter\Pdo\Postgresql as DbAdapter;

$di->set('db', function () use ($config) {
    return new DbAdapter(array(
        'host' => $config->database->host,
        'username' => $config->database->username,
        'password' => $config->database->password,
        'dbname' => $config->database->dbname,
        "charset" => $config->database->charset
    ));
});

WCF webservice- GetElecteurs method returning a blank page

I've developed a WCF web service with only one method so far; the method is supposed to get all "Electeur" data from my PostgreSQL database. I'm then supposed to call this web service, consume it in my jQuery Mobile app and display the data in a datatable. I tried following this tutorial while developing the web service: http://ift.tt/1ue8L8a

When I build the web service everything seems fine, but when I view it in the browser by navigating to http://localhost:20913/Service1.svc/GetElecteurs, I get a blank page. I don't know what's wrong or whether I have missed something. I also don't know how I am supposed to display the fetched data as a datatable in my jQuery Mobile client app. Any help is much appreciated! Thanks in advance.

P.S.: I'm using PostgreSQL, which is why I couldn't use LINQ in my code. Instead I copied Npgsql.dll into the bin folder of my project and added it as a reference, then used the Npgsql commands to connect to my database and fetch the data.

Here's my code :

Service1.svc.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;
using Npgsql;
using System.Data;
using System.Configuration;

namespace WcfService1
{
// NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "Service1" in code, svc and config file together.
public class Service1 : IService1
{
    // To use HTTP GET, add [WebGet] attribute. (Default ResponseFormat is WebMessageFormat.Json)
    // To create an operation that returns XML,
    //     add [WebGet(ResponseFormat=WebMessageFormat.Xml)],
    //     and include the following line in the operation body:
    //         WebOperationContext.Current.OutgoingResponse.ContentType = "text/xml";

    // Add more operations here and mark them with [OperationContract]
    // The connexion
    NpgsqlConnection cnx = new NpgsqlConnection("Server=localhost;User Id=postgres;Password=*****;Database=electionscasa;");

    public List<Electeurs> GetElecteurs()
    {
        List<Electeurs> ElecteursInscrits = new List<Electeurs>();
        {
            //Openning the connexion string
            cnx.Open();

            //Connecting to the database and fetching data : List of all the voters
            NpgsqlCommand cmd = new NpgsqlCommand("SELECT cinelecteur, nomelecteur, prenomelecteur, adresseelecteur, professionelecteur FROM electeur", cnx);
            cmd.ExecuteNonQuery();

            NpgsqlDataAdapter da = new NpgsqlDataAdapter(cmd);
            DataTable dt = new DataTable();
            //Populating the datatable
            da.Fill(dt);
            cnx.Close();
        }
        return ElecteursInscrits;
    }
}
}

IService1.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace WcfService1
{
[ServiceContract]
public interface IService1
{
    [OperationContract]
    [WebInvoke(Method = "GET", ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.Wrapped, UriTemplate = "GetElecteurs")]
    List<Electeurs> GetElecteurs();
}

[DataContract]
public class Electeurs
{
    string IdElecteur;
    string CinElecteur;
    string NomElecteur;
    string PrenomElecteur;
    string AdresseElecteur;
    string ProfessionElecteur;

    [DataMember]
    public string IdElect
    {
        get { return IdElecteur; }
        set { IdElecteur = value; }
    }

    [DataMember]
    public string CinElect
    {
        get { return CinElecteur; }
        set { CinElecteur = value; }
    }
    [DataMember]
    public string NomElect
    {
        get { return NomElecteur; }
        set { NomElecteur = value; }
    }
    [DataMember]
    public string PrenomElect
    {
        get { return PrenomElecteur; }
        set { PrenomElecteur = value; }
    }
    [DataMember]
    public string AdresseElect
    {
        get { return AdresseElecteur; }
        set { AdresseElecteur = value; }
    }
    [DataMember]
    public string ProfessionElect
    {
        get { return ProfessionElecteur; }
        set { ProfessionElecteur = value; }
    }
}
}

web.config

<?xml version="1.0"?>
<configuration>

<system.web>
<compilation debug="true" targetFramework="4.0" />
</system.web>
<system.serviceModel>

<!-- We need to add the following <services> tag inside the <serviceModel> -->
<services>
  <service name="JSONWebService.Service1">
    <endpoint address="../Service1.svc"
      binding="webHttpBinding"
      contract="JSONWebService.IService1"
      behaviorConfiguration="webBehaviour" />
  </service>
</services>

<behaviors>
  <serviceBehaviors>
    <behavior>
      <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
      <serviceMetadata httpGetEnabled="true"/>
      <!-- To receive exception details in faults for debugging purposes, set the value below to true.  Set to false before deployment to avoid disclosing exception information -->
      <serviceDebug includeExceptionDetailInFaults="false"/>
    </behavior>
  </serviceBehaviors>

  <!--  //In the <behaviors> tag, we need to add the following <endpointBehaviors> tag-->
  <endpointBehaviors>
    <behavior name="webBehaviour">
      <webHttp/>
    </behavior>
  </endpointBehaviors>

</behaviors>
<serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>

<system.webServer>
<!-- If we're going to be hosting the web services on a particular domain, but accessing the services from some JavaScript on a different domain, 
    then we need to make a further change to the web.config file by adding these following lines
   <httpProtocol>
     <customHeaders>
       <add name="Access-Control-Allow-Origin" value="*" />
       <add name="Access-Control-Allow-Headers" value="Content-Type, Accept" />
     </customHeaders>
   </httpProtocol>
-->
<modules runAllManagedModulesForAllRequests="true"/>
</system.webServer>

</configuration>